This post is a write-up of a panel discussion held at EA Global: Boston 2023 (27–29 October). The panel was moderated by Matthew Gentzel. Matthew currently co-leads Longview Philanthropy’s program on nuclear weapons policy and co-manages the organization’s Nuclear Weapons Policy Fund.
He was joined by two other experts on US-China relations and related catastrophic risks:
Below is a transcript of the discussion, which we’ve lightly edited for clarity. The panelists covered the following main topics:
Opening remarks summarizing the panelists’ general views on the US-China relationship and related risks, with an initial focus on nuclear security before exploring other risks and dangerous technologies
How to address different norms around sharing information
Problems resulting from risk compensation
Quick takes on which risks are overhyped and which are underhyped
AI governance structures, the Chinese defense minister’s dismissal, and the US’s semiconductor export policies
Ideas for calibrating how the US cooperates and/or competes with China
Opening remarks
Matthew: We’ll start with opening remarks, then get into questions.
Tong: Thank you so much. I think the catastrophic risk between the US and China is increasing, not least because the chance of serious military conflict between the two sides — most likely arising from a Taiwan Strait scenario — is growing. And in a major military conflict, the risk of nuclear escalation is certainly there. In the most extreme scenario, this could lead to a nuclear winter if there’s a massive nuclear exchange. Even a limited nuclear exchange or very serious conventional conflict between the two powers could destabilize the international geopolitical landscape and very negatively affect the normal development and progression of humanity.
In the long run, I worry that both sides are preparing for a worst-case scenario of major conflict with each other, leading to de facto war mobilization efforts. In the case of China, strategists in Beijing are still worried that there is going to be an eventual showdown between the two sides. And therefore, China is working on developing the necessary military capabilities for that eventuality. It is developing its economic capacity to withstand international economic sanctions and its capability to influence the international narrative to avoid political isolation in a major crisis. And those efforts are leading to incremental decoupling in the economic and technological domains, as well as to general decoupling of policy expert communities on the two sides.
As a result of this long-term competition and rivalry, I think long-term risks to humanity are generally downplayed. Part of China’s recent policy change is a very rapid increase of its nuclear weapons capability. This does not necessarily mean that China aims to use nuclear weapons first in a future conflict. However, as China focuses on enhancing its nuclear and strategic military capabilities, it is paying less attention to the risks associated with such development. One example is China’s increasing interest in having launch-under-attack or launch-on-warning nuclear capability. That means China will depart from its decades-long practice of maintaining a low alert status for its nuclear forces and shift towards a rapid-response posture, in which China’s early warning system will provide Chinese leadership with a warning of any incoming missile attack. Before the incoming missiles arrive in China, and before the nuclear warheads have detonated over Chinese territory, Chinese leadership would have the opportunity to make a decision to launch a nuclear retaliation. So, in the near- to mid-term future, it is likely that both the US and China will have launch-on-warning capabilities. I think that increases the risk of nuclear conflict between the two powers more than anything else, particularly because of the unique geography in Northeast Asia. An incoming American nuclear attack against North Korea might very well appear to a Chinese early-warning system to be an incoming American nuclear attack against Northeast China. That could lead to a misunderstanding and a Chinese overreaction.
Also, a recent Department of Defense report on Chinese military power points out that China appears to be interested in developing a conventional ICBM (intercontinental ballistic missile) capability. If the US detects an incoming Chinese ICBM attack, it may be hard for the US to know whether it’s a nuclear attack or a conventional attack, making America’s launch-on-warning capability very dangerous.
Additionally, both countries are working on developing new technologies like hypersonic missiles that can maneuver and change their trajectory during flight, further complicating both sides’ ability to accurately understand the intended destination of an incoming attack. And AI is another technology that both countries might increasingly use to help them develop situational awareness around the nature of an incoming attack and make decisions on how to retaliate. All of these developments make nuclear use more dangerous and more likely than before.
The risk is made worse not only by China’s nuclear buildup and the interactive nuclear arms racing dynamic between the two countries, but also by the US’s lack of understanding of the drivers behind Chinese policy change. Yes, China has the ambition to become a world-class military power and have much greater nuclear capability. But, on the other side of the same coin is China’s increasing fear. China has developed an increasingly serious existential threat perception towards the United States. China thinks that the US is becoming more hostile towards it. And that requires China to demonstrate a stronger strategic capability to counter the perceived American hostility. Why has China developed a greater fear when China’s capability is increasing? That’s a separate issue that we don’t have time to discuss here. However, the American reaction to Chinese nuclear and military development appears to lack a good understanding of Chinese thinking.
It also lacks careful consideration of the most sensible way to respond. In fact, the current American thinking about how to respond runs the risk of enhancing Chinese fear and encouraging China to further invest in its nuclear and strategic military buildup. And American countermeasures, such as deploying nuclear forces to places near China and showing greater interest in concepts like “left of launch” — which means using both kinetic and non-kinetic means to interfere with China’s nuclear command control system — could actually make the situation even more volatile and prone to nuclear escalation.
One important obstacle is that despite this growth of risk, China still rejects cooperative efforts to manage and reduce it, because China has the fear that efforts to manage risk and escalation could make it safer for the US to provoke military conflicts and embolden American military aggressiveness. Therefore, we haven’t seen concrete efforts between the two sides to even discuss this very serious situation.
At the same time, we have to understand that there are profound internal changes occurring within China. China’s leadership has been increasingly personalistic and focused on significantly concentrating power within the country’s decision-making system. There is very serious politicization of China’s policymaking system. There’s greater emphasis on regime security, which leads to securitization of non-security issues. There is much greater secrecy. Chinese experts, both civilian and military, are increasingly marginalized in China’s internal policy deliberation and policy thinking. All of those internal changes raise increasing questions about the quality of China’s strategic decision-making, and there is a higher risk of incoherence in China’s policymaking. For example, we see Chinese nuclear forces increasingly talking about winning strategic victories, which seems to indicate that they are embracing nuclear war-fighting doctrine. But it’s hard to tell whether that’s the case, or whether the nuclear force officials are simply trying to show political loyalty to Mr. Xi, who has personally stressed the importance of being generally more prepared for military warfare.
There’s also the impact of greater disinformation. A common misunderstanding in the international community is that China is promoting propaganda. But in fact, the problem goes much deeper, because China itself appears to genuinely believe in the disinformation. Even policy and technical experts do. We see many Chinese biosecurity experts who genuinely believe there is truth to the Russian claim that the US has been using labs in Ukraine and other countries to secretly develop biological weapons. That’s another major challenge.
Similarly, with AI and autonomous weapons, Chinese experts tend to use worst-case-scenario thinking to evaluate what the US is doing when gauging the incorporation of these new technologies into its military capabilities. So, how will both sides avoid developing misunderstandings about each other’s capabilities? Policy is becoming increasingly challenging, especially when some of the technologies, like AI algorithms, are not visible and are difficult to verify.
My last point is that the overall politicization of China’s domestic decision-making system is also making China much more willing to protect anything related to national security — and less transparent about sharing any information on events related to national security. If there is another pandemic in the future, I very much worry that China will have less incentive to share information than it did during the COVID pandemic.
All of these challenges are evident. I’m happy to talk about what can be done to mitigate them later in the discussion. Thank you.
Matthew: Thank you so much. Over to you, Bill.
Bill: It’s great to be with you all. Thank you for inviting me. I want to preface my remarks by clarifying that I’m a generalist in these matters; my expertise isn’t nuclear-specific. Tong’s expertise far outweighs my own there. I’m going to touch on nuclear risk a bit, but also on bio and AI risk. I should also add a final caveat, which is that I am famous — or infamous — at my think tank for being perhaps the biggest China hawk. But I think I’m right, so let’s see how we feel at the end of this conversation.
I’ll start with the nuclear threat because it influences the latter two risks. We just got a great overview and I don’t have too much to add. To state the obvious, I worry a lot about nuclear issues with China, but also more broadly about what’s happening there and with the US’s relationship with China. We are at a moment where the memory of what nuclear weapons can do has faded considerably in the general world population, which is, I think, an underappreciated fact. The degree to which the Cuban Missile Crisis was instrumental in establishing a lot of our nuclear controls, and a lot of the sobriety in political leadership, can be easy to forget. And a lot of that sobriety has waned over the last few decades, and particularly since the Cold War. The result is that nuclear controls are breaking down. We’ve seen that with the US withdrawal from the US-Russian Anti-Ballistic Missile Treaty in 2002, from the Intermediate-Range Nuclear Forces Treaty in 2019, and from the Open Skies Treaty. What controls remain don’t look great. And the ecosystem as a whole just doesn’t look promising.
Maybe most concerning is that the strategies around nuclear use around the world are more and more unhinged. People point to different beginnings for this particular trend, often centered on India and Pakistan. Pakistan has pioneered a strategy of overcoming a disadvantage in conventional weapons by using nukes as a backstop. The idea is to say they can fight on a conventional level, but if it gets too bad, they will brandish nuclear capabilities and threaten to use them. We’ve seen Russia do its own version of this with Ukraine. Some people would say this goes back to the NATO strategy early in the Cold War of overcoming a conventional deficit vis-a-vis Russia by threatening nuclear use on a tactical or smaller level. The idea is that basically, if it looks like a country or group of allies is going to lose a conventional war, they will make the war look so scary that people will stop — and they’ll make it scary by threatening nukes, which, obviously, is a scary prospect for good reason.
China has not subscribed to this strategy. But the strategy introduces a lot of instability in a world where nuclear use feels more possible and when there’s a growing understanding that this strategy is employable. My view here might provide a different rendering of some of the issues that Tong has laid out. China is rapidly expanding its arsenal. The US was already freaking out about that, and then the expansion significantly surpassed what we thought it would be. So, it’s pretty dramatic. We like to separate out issues like nuclear, climate, and a few others that affect all of humanity. And if you talk to US State Department folks, they tend to express a lot of frustration that all such issues get subordinated to broader geopolitical goals when dealing with China. Nuclear is no exception. And then diplomacy with China involving other risky areas of tech, which I’ll turn to in a minute, has more or less fallen off a cliff. So on issues like space debris, there’s a lot of frustration that talks have more or less broken down. There’s a lot of distrust. We obviously all know what happened with bio risks. There’s a lot of concern that China refuses to answer the crisis phone when things happen. We saw that with the balloon incident, and with other incidents as well.
As a final point on the nuclear issue, I’d like to reiterate what Tong mentioned about the change in political culture in China that we’ve seen. It’s more secretive, it’s more ossified, it’s more personalistic. I also think it is a return to a political ethos that prizes ruthlessness, domestically and internationally. And when you talk about nuclear weapons, ruthlessness is not a political virtue that we like to see front and center. That’s my take.
I understand that this crowd already has a lot of concern over AI — I think rightfully so. I’m probably less bullish on some of the big concerns than a lot of people in this crowd are. Nonetheless, my own theory and main message on AI is that every AI risk is significantly more acute in China. I can go into the specific reasons why, but at least in 2022, according to an Ipsos survey, America was among the least optimistic nations in the world on the effects of AI. We already have a lot of misgivings, and a certain paranoia or fear of failure, which a lot of the safety literature will tell you is essential to avoid catastrophes. China, on the other hand, was the most optimistic society in the world on AI. Not only that, but historically, Chinese AI developers have really taken pride in the fact that their government is willing to stomach more risk and collateral damage on tech issues.
I think one of the main reasons why perceptions of AI in the US and China are so disparate is due to internal Chinese politics. I’ll give three predominant factors. One is disaster amnesia. When very bad tech catastrophes happen in China, they are very quickly repressed and covered up. We routinely see falsified statistics and sometimes denial that tragedies have happened at all. The most extreme example is that, by conservative estimates, 200,000 individuals were killed by radiation poisoning from Chinese nuclear tests in the 1990s. The Chinese government still will not acknowledge this. There are a lot of other instances as well, including the Chinese milk formula scandal and the Wenzhou train crash.
A second thing that exacerbates the country’s amnesia is large-scale state ambitions that tend to invite disaster. China has explicitly stated that it plans to overtake the US in AI by 2030 and to become the world leader in biotech by 2035. China is really invested in these sorts of ambitious goals on time horizons that require a lot of quick movement. We’ve seen that regularly — and then catastrophe hits, the most severe example obviously being the Great Leap Forward, where a desire to leapfrog Western nations in steel production led to the largest famine in human history. Obviously, we’re not in Mao’s time, so a lot of things are different. But we’ve also seen repeats of this kind of dynamic of really ambitious, state-driven goals resulting in calamity in other areas. For example, there has been the one-child policy with the demographic imbalance that now plagues the country. A smaller case would be the satellite launch industry, which also ended in a lot of tragedy in the 1990s. The Belt and Road Initiative today has some features of this, where there are some large-scale projects that are literally falling apart. But more to the point, it has ratcheted up financial instability in developing countries around the world to a much greater degree than most people expected.
And then the third, and maybe the biggest, reason why we should be worried is authoritarian crisis mismanagement. Again, the most obvious example is COVID. But there are a lot more. I think one of the best ones is HIV in the 1990s. There was an outbreak for several years that infected at least a million individuals. And there were just layers of government cover ups and obfuscation. The problem intensified as a result. Yet the party leaders who oversaw the situation ended up being promoted, even after the effects of the cover up were known. So, the system really incentivizes avoiding early intervention that allows disasters to snowball into catastrophes. SARS was another example. And what makes COVID especially egregious is that after SARS, the government spent $850 million to create public health reporting mechanisms that would supposedly overcome this autocratic instinct to cover up disasters and let them snowball. And despite that $850 million initiative, COVID ended up playing out in almost exactly the same way. The problem is pretty baked into the system.
If AI, or bio tools or labs, went haywire in China, I think the odds of early intervention to avert catastrophe would be low. With bio, I think the state has been extremely ambitious. It is harvesting genetic data on an industrial scale, both in China and around the world. But the safety record is pretty dramatic. So there’s obviously the COVID case, which is politicized, but you can probably intuit what I think happened there. There are other instances as well. A Beijing bio lab had four SARS leaks — two in the same year. And the largest known lab leak so far was Lanzhou, which leaked aerosolized Brucella, leading to more than 10,000 individuals contracting the disease. Between 1975 and 2016, we have recorded between 60 and 70 accidental lab incidents that resulted in exposure to a highly contagious pathogen. Disproportionately those are from the US and Europe, but that is most likely because those are the places that will report such incidents. The fact that some — and arguably all — of the most egregious cases have come from China suggests that there’s a lot more going on there that we should be worried about. We only hear about the ones that are so bad that they can’t be covered up.
On AI, it’s more preliminary and prospective. But I did have a friend who was speaking to a machine learning professor at Tsinghua University, which is my friend’s and my alma mater and one of the preeminent institutions in China in the area. And the professor said something about how they view AI as the most transformative technology since nuclear weapons, and therefore, they would like to be “the first to detonate.” They weren’t trying to be provocative, but I think it does communicate the immense amount of techno optimism — which goes hand-in-hand with little appreciation for the risks — in China. It stems from this disaster amnesia, China’s really big goals, and its development history. My fear is that if you mix all of that together, you have a catastrophe waiting to happen.
How to address different norms around sharing information
Matthew: Thank you both for your opening remarks. We have a lot of potential content to cover. The first question is about informational asymmetry — the differences between open and closed societies and the ways in which the US and Chinese governments process information. This seems like a potential source of conflict. For instance, China may be concerned that increased transparency will be a disproportionate disadvantage when it comes to nuclear weapons. How do we address this issue of informational asymmetry? How do we get the two governments to have conversations and reduce risk when there’s this big difference in the environment and norms around information in each society?
Tong: Basically, there are both tactical and strategic measures that are necessary. Tactical measures are, of course, easier. They involve promoting expert-level exchanges to make sure that at least Chinese nuclear, AI, and biosecurity policy experts are better informed of the potential risks and don’t develop serious misunderstandings about American policy or capabilities. That’s already very hard to do because of the tightening of security rules. In China, experts face greater difficulty traveling internationally and meeting with foreigners. But still, I think more frequent, substantive exchanges among experts are the most straightforward way to produce a positive near-term effect. There are many historical examples of false warnings and technical or operational errors leading to catastrophe. US experts can certainly share those with their Chinese counterparts. And once China develops a greater appreciation of the risks, it would be more incentivized to adopt unilateral measures to prevent those risks (even if it still rejects quantitative measures).
However, and more importantly, those tactical measures can only work to a certain extent, because catastrophes are more likely to happen in a closed, authoritarian system like China’s. We are facing a growing information perception gap between China and Western societies. And nuclear policy researchers like me are aware that nuclear deterrence is not perfect; it is error-prone. It is supposed to be a temporary solution that gives us time to work on our underlying systemic disagreements and problems. But right now, we are not really thinking about how to address those underlying problems and help China appreciate that the information restriction is fostering misunderstanding because there is a feedback loop misinforming the Chinese leaders themselves. That’s a serious and detrimental impact of information restriction. We need to think about measures to address these underlying challenges.
Bill: I think that was a great overview. I have little to add, except maybe to say that while there is incremental progress from dialogues — and some receptiveness on the Chinese side to our safety overtures on autonomous weapons and the like, which they’re matching — on the whole, I’m pessimistic that communications are working. And there can be a temptation to think that if we just got the communication piece right, things would be better. That thinking is most absurd in calls for more crisis hotlines. We already have two, both of which are defunct. Adding a third won’t help.
This may run counter to the fear of the Chinese misinterpreting our intentions. But if we are living in an age of defunct diplomacy or discourse on these issues, I think the top priority should be ensuring that deterrence is really, really strong. Right now, that is how to avoid situations where there’s uncertainty about what might happen.
Problems resulting from risk compensation
Matthew: My next question is how do you think about problems resulting from risk compensation? You’ve both hit on this theme in different ways. One potential outcome of risk compensation is not responding to crisis communication hotlines. Another could be perceiving certain risk-reduction measures as guardrails for dangerous behavior. For example, China does not want US communications technology in the South China Sea. There are examples like this where, on the one hand, something seems as if it would directly reduce risk. But on the other hand, there are incentives from a competitive standpoint to either not adopt these risk-reduction measures, or if they occur, to engage in different behavior from what one would have engaged in otherwise because of the increased stability the measures bring. What are some ways to address this kind of strategic behavior? And are there areas where you’re not really worried about it?
Bill: That’s a really hard question. I am worried about it. I think we simply have reached a point in our relationship where, with quite a lot of the crisis communications, we psych ourselves out. There are endless psych-out cycles on both sides. It seems counterproductive. The only thing that seems to change the game is just facts on the ground in terms of power. And, unfortunately, for me, that’s just where we are. But I fear that’s not a very sophisticated or inspiring answer.
Tong: Well, one major challenge is the two sides fundamentally disagree about who is provoking the crisis. They need to not only talk about crisis communication mechanisms, but also have a candid conversation about which behaviors are destabilizing and irresponsible. They need to establish common views on what those behaviors are. I think that because the Chinese expert community is more exposed to international thinking, they are more accepting of many of the crisis management measures. It is the Chinese political leaders who are more interested in deliberately using risk manipulation to achieve broader security policy goals. It’s hard to actually talk with Chinese senior leaders in a substantive way. We can only talk with the more open-minded Chinese experts.
I think that’s worth doing. We can share the urgency of some of the crisis prevention and management measures. Although Chinese experts are less appreciative of certain risks involving new technologies and new escalation pathways, if we demonstrate those to them in a stronger manner, that could help cultivate consensus among the expert community over time. Then, they would be able to indirectly influence the thinking of the political leaders. That’s the best we can do at this moment.
Overhyped and underhyped risks
Matthew: Thank you. This question round will be rapid-fire. I’ll go through several different potential threats and ask you whether you consider each threat to be overhyped or underhyped relative to public perceptions.
First, do you think AI and nuclear command-and-control is an overrated or underrated threat?
Bill: In public discourse, it’s underrated. In expert discourse, I think it’s overrated.
Tong: I think it’s overrated in the sense that both American and Chinese policymakers must be very aware of the potential risks.
Matthew: What about AI-enabled targeting or strategic warning systems that enable one to find nuclear forces or get early warning of potential strategic intention? I don’t mean early warning in the sense of detecting a launch, just that there’s movement.
Bill: So far, I think that’s an overrated threat.
Tong: It’s hard to tell. There’s just not much discussion on that issue.
Matthew: The next threat is AI-enabled bio weapons or just bio weapons in general — would you say that’s over- or underrated?
Bill: Bio weapons, as opposed to biological leaks? I guess the potential is there, and I’m really worried about that, but it’s not imminent. But it could be crazy.
Tong: I think the risk is underrated, at least in the Chinese case. China tends to overestimate what the US is doing regarding using new biotechnology to develop military capabilities.
Matthew: Next on the list is hypersonic missiles: overhyped or underhyped?
Bill: Among the public, I think underhyped.
Tong: I think it’s both. It’s overrated by the public in terms of the perception about the revolutionary impact. But it’s underrated in that many policy experts still lack full appreciation of the potential escalation risks.
Matthew: What about missile defenses, both in the US and China, in terms of strategic stability and things of that nature?
Tong: I think it’s overrated. Countries tend to exaggerate the impact of missile defense on strategic stability.
Bill: I don’t know enough on that topic to have an opinion.
Matthew: The last one I’ll ask is more about different sources of threats. Some people focus on arms racing, some people focus on nuclear blackmail. Which are you more worried about?
Tong: For me, it’s interrelated. Arms racing leads to riskier nuclear postures and makes nuclear use in a crisis more likely to happen.
Bill: Yeah, I think I feel similarly. I mostly worry about how this could push up inadvertent escalation cycles.
AI governance structures, the Chinese defense minister’s dismissal, and the US’s semiconductor export policies
Matthew: All right, thank you. How optimistic are you about AI governance structures that include both the US and China, compared to structures that are more focused on coalition-building — for example, the US structuring cooperation among its allies?
Bill: I’m definitely more optimistic about coalition-building among allies having appreciable effects. I think that there is momentum and seemingly some receptivity to set some guardrails between the US, China, and the rest of the world. But I think setting agreements is way overhyped. Following those agreements is different, and something I’m less bullish on.
Matthew: What’s your read on the recent dismissal of China’s defense minister?
Tong: We just don’t know the real reason. Corruption is the most likely theory. The fact that we don’t know basically shows that there’s increasing opacity in Chinese high-level politics. If corruption is the reason, that’s indicative of a system that is increasingly secretive and eliminates internal checks and balances, which not only fosters greater room for corruption, but more importantly makes China less able to make coherent and consistent policies. I think that’s a greater concern than simply corruption.
Bill: There’s that great quote from Churchill that I can’t recall verbatim, but it’s something about how communist leadership struggles are like bulldogs fighting under a rug. You don’t know what’s happened, but you see the bones fly out. And we’ve seen a lot of bones fly out in recent months. I don’t know what’s going on under the rug. But the frequency and the level of dismissals is really conspicuous.
Matthew: What is the impact of the US’s semiconductor export control strategy and policies in recent years? Are they effective? Is there a backfire risk? How do you view the balance of the benefits and costs of these sorts of tech control policies?
Bill: I think they are working. And I think they have a runway to keep working, but that doesn’t mean they will always work. Whether or not it will backfire depends on three very opaque variables. One is the speed of AI progress. Second is the speed of China’s indigenization efforts. And third is if there are any game-changing algorithmic advancements that will change the needs around using existing chips or different types of chips to get to similar ends. All three of those things are very hard to assess.
Tong: If I may add a few words to an excellent answer, I think the US policy lacks clarity about what it wants China to do to avoid tighter export control rules. Yes, the US is unhappy about China’s civil-military fusion, but what specific measures should China take to make the US less concerned? That’s unclear. It’s hard for China to know what it should do. And although the US policy tries to limit China’s military development rather than undermine its civil, technological development, I think the US fails to draw a line between those two objectives. Therefore, to China, it appears that the US is undermining China’s overall growth.
Some US strategists think that should be the goal: to comprehensively undermine China’s technological competitiveness and its economic development. But if that’s the goal, then the overall impact on US-China relations is totally unclear. Maybe export controls can slow down and undermine China’s competitiveness. But as Bill explained, the secrecy of the authoritarian system already creates profound internal challenges for China. Do we really need to create extra difficulties that will thwart its development? I wonder whether the US really needs such tight export control rules when China is already facing so many internal challenges.
More broadly, the goal should be to promote a more open and liberal Chinese society that is friendly to Western countries and the international community. But if you take measures that alienate Chinese civil society, I think you will probably have a counterproductive impact.
Ideas for calibrating how the US cooperates and/or competes with China
Matthew: Thank you. We’re running short on time, so I’m going to ask one more question. Are there areas where US policy toward China should be more cooperative, and competition less intense? Where should competition be more intense? And where should it be more conditional, so as to provide better incentives? It’s a bit of a complicated question.
Bill: I can share the main things I would do. First, I would increase our deterrence in Taiwan as much as possible. That way, there would be no ambiguity that could spiral. I guess that is more on the competition side. Also, I would turn up recruitment of AI researchers and bioengineers from China to 11, but try to retain them in the United States.
One area I see as under-conceptualized is bio risk. I don’t know whether we need more cooperation or more competition, but we need to think about it more. It hasn’t been explored enough, I think, in either direction.
Tong: I think the US should have tailored, nuanced policies with clear and justifiable goals, and focus on the overall objective of fostering a more liberal and open Chinese system and society. The US should consider not only what benefits its long-term competitiveness, but also how to create a collaborative and open Chinese society that fundamentally reduces bilateral risks and makes peace and stability more sustainable.
Matthew: Thank you so much. With that, we’ll close out this session.
The US-China Relationship and Catastrophic Risk (EAG Boston transcript)
Introduction
This post is a write-up of a panel discussion held at EA Global: Boston 2023 (27–29 October). The panel was moderated by Matthew Gentzel. Matthew currently co-leads Longview Philanthropy’s program on nuclear weapons policy and co-manages the organization’s Nuclear Weapons Policy Fund.
He was joined by two other experts on US-China relations and related catastrophic risks:
Tong Zhao, Senior Fellow for the Nuclear Policy Program and Carnegie China, Carnegie Endowment for International Peace
Bill Drexel, Fellow for the Technology and National Security Program, Center for a New American Security
Below is a transcript of the discussion, which we’ve lightly edited for clarity. The panelists covered the following main topics:
Opening remarks summarizing the panelists’ general views on the US-China relationship and related risks, with an initial focus on nuclear security before exploring other risks and dangerous technologies
How to address different norms around sharing information
Problems resulting from risk compensation
Quick takes on which risks are overhyped and which are underhyped
AI governance structures, the Chinese defense minister’s dismissal, and the US’s semiconductor export policies
Ideas for calibrating how the US cooperates and/or competes with China
Opening remarks
Matthew: We’ll start with opening remarks, then get into questions.
Tong: Thank you so much. I think the catastrophic risk between the US and China is increasing, not least because the chance of serious military conflict between the two sides — most likely arising from a Taiwan Strait scenario — is growing. And in a major military conflict, the risk of nuclear escalation is certainly there. In a mostly strained scenario, this could lead to a nuclear winter if there’s a massive nuclear exchange. Even a limited nuclear exchange or very serious conventional conflict between the two powers could destabilize the international geopolitical landscape and very negatively affect the normal development and progression of humanity.
In the long run, I worry that both sides are preparing for a worst-case scenario of major conflict with each other, leading to de facto war mobilization efforts. In the case of China, strategists in Beijing are still worried that there is going to be an eventual showdown between the two sides. And therefore, China is working on developing the necessary military capabilities for that eventuality. It is developing its economic capacity to withstand international economic sanctions and its capability to influence the international narrative to avoid political isolation in a major crisis. And those efforts are leading to incremental decoupling in the economic and technological domains, as well as to general decoupling of policy expert communities on the two sides.
As a result of this long-term competition and rivalry, I think long-term risks to humanity are generally downplayed. Part of China’s recent policy change is a very rapid increase of its nuclear weapons capability. This does not necessarily mean that China aims to use nuclear weapons first in a future conflict. However, as China focuses on enhancing its nuclear and strategic military capabilities, it is paying less attention to the risks associated with such development. One example is China’s increasing interest in having launch-under-attack or launch-on-warning nuclear capability. That means China will depart from its decades-long practice of maintaining a low-level status for its nuclear forces and shift towards a rapid-response posture, in which China’s early warning system will provide Chinese leadership with a warning of any incoming missile attack. Before the incoming missiles arrive in China, and before the nuclear war has detonated over Chinese territory, Chinese leadership would have the opportunity to make a decision to launch a nuclear retaliation. So, in the near- to mid-term future, it is likely that both the US and China will have launch-on-warning capabilities. I think that increases the risk of nuclear conflict between the two powers more than anything else, particularly because of the unique geography in Northeast Asia. An incoming American nuclear attack against North Korea might very well appear to a Chinese early-warning system to be an incoming American nuclear attack against Northeast China. That could lead to a misunderstanding and a Chinese overreaction.
Also, a recent Department of Defense report on Chinese military power points out that China appears to be interested in developing conventional ICBM, which stands for intercontinental ballistic missile capability. If the US detects an incoming Chinese ICBM attack, it may be hard for the US to know whether it’s a nuclear attack or a conventional attack, making America’s launch-on-warning capability very dangerous.
Additionally, both countries are working on developing new technologies like hypersonic missiles that can maneuver and change their trajectory during flight, further complicating both sides’ ability to accurately understand the intended destination of an incoming attack. And AI is another technology that both countries might increasingly use to help them develop situational awareness around the nature of an incoming attack and make decisions on how to retaliate. All of these developments make nuclear use more dangerous and more likely than before.
The risk is made worse not only by China’s nuclear buildup and the interactive nuclear arms racing dynamic between the two countries, but also by the US’s lack of understanding of the drivers behind Chinese policy change. Yes, China has the ambition to become a world-class military power and have much greater nuclear capability. But, on the other side of the same coin is China’s increasing fear. China has developed an increasingly serious existential threat perception towards the United States. China thinks that the US is becoming more hostile towards it. And that requires China to demonstrate a stronger strategic capability to counter the perceived American hostility. Why has China developed a greater fear when China’s capability is increasing? That’s a separate issue that we don’t have time to discuss here. However, the American reaction to Chinese nuclear and military development appears to lack a good understanding of Chinese thinking.
It also lacks careful consideration of the most sensible way to respond. In fact, the current American thinking about how to respond runs the risk of enhancing Chinese fear and encouraging China to further invest in its nuclear and strategic military buildup. And American countermeasures, such as deploying nuclear forces to places near China and showing greater interest in concepts like “left of launch” — which means using both kinetic and non-kinetic means to interfere with China’s nuclear command control system — could actually make the situation even more volatile and prone to nuclear escalation.
One important obstacle is that despite this growth of risk, China still rejects cooperative efforts to manage and reduce it, because China has the fear that efforts to manage risk and escalation could make it safer for the US to provoke military conflicts and embolden American military aggressiveness. Therefore, we haven’t seen concrete efforts between the two sides to even discuss this very serious situation.
At the same time, we have to understand that there are profound internal changes occurring within China. China’s leadership has been increasingly personalistic and focused on significantly concentrating power within the country’s decision-making system. There is very serious politicization of China’s policymaking system. There’s greater emphasis on regime security, which leads to securitization of non-security issues. There is much greater secrecy. The Chinese experts, civilian and military, are increasingly marginalized in China’s internal policy deliberation and policy thinking. All of those internal changes raise increasing questions about the quality of China’s strategic decision-making, and there is a higher risk of incoherence in China’s policymaking. For example, we see Chinese nuclear forces increasingly talking about winning strategic victories, which seems to indicate that they are embracing nuclear war-fighting doctrine. But it’s hard to tell whether that’s the case, or whether the nuclear force officials are simply trying to show political loyalty to Mr. Xi, who has personally stressed the importance of being generally more prepared for military warfare.
There’s also the impact of greater disinformation. A common misunderstanding in the international community is that China is promoting propaganda. But in fact, the problem goes much deeper, because China itself appears to genuinely believe in the disinformation. Even policy and technical experts do. We see many Chinese biosecurity experts who genuinely believe there is truth to the Russian claim that the US has been using labs in Ukraine and other countries to secretly develop biological weapons. That’s another major challenge.
Similarly, with AI and autonomous weapons, Chinese experts tend to use worst-case-scenario thinking to evaluate what the US is doing when gauging the incorporation of these new technologies into its military capabilities. So, how will both sides avoid developing misunderstandings about each other’s capabilities? Policy is becoming increasingly challenging, especially when some of the technologies like AI — an algorithm — are not visible and are difficult to verify.
My last point is that the overall politicization of China’s domestic decision-making system is also making China much more willing to protect anything related to national security — and less transparent about sharing any information on events related to national security. If there is another pandemic in the future, I very much worry that China will have less incentive to share information than it did during the COVID pandemic.
All of these challenges are evident. I’m happy to talk about what can be done to mitigate them later in the discussion. Thank you.
Matthew: Thank you so much. Over to you, Bill.
Bill: It’s great to be with you all. Thank you for inviting me. I want to preface my remarks by clarifying that I’m a generalist in these matters; my expertise isn’t nuclear-specific. Tong’s expertise far outweighs my own there. I’m going to touch on nuclear risk a bit, but also on bio and AI risk. I should also add a final caveat, which is that I am famous — or infamous — at my think tank for being perhaps the biggest China hawk. But I think I’m right, so let’s see how we feel at the end of this conversation.
I’ll start with the nuclear threat because it influences the latter two risks. We just got a great overview and I don’t have too much to add. To state the obvious, I worry a lot about nuclear issues with China, but also more broadly about what’s happening there and with the US’s relationship with China. We are at a moment where the memory of what nuclear weapons can do has faded considerably in the general world population, which is, I think, an underappreciated fact. The degree to which the Cuban Missile Crisis was instrumental in establishing a lot of our nuclear controls, and a lot of the sobriety in political leadership, can be easy to forget. And a lot of that sobriety has waned over the last few decades, and particularly since the Cold War. The result is that nuclear controls are breaking down. We’ve seen that with the US-Russian Anti-Ballistic Missile Treaty of 2002, the Intermediate-Range Nuclear Forces Treaty of 2019, and the Open Skies Treaty. What controls remain don’t look great. And the ecosystem as a whole just doesn’t look promising.
Maybe most concerning is that the strategies around nuclear use around the world are more and more unhinged. People point to different beginnings for this particular trend, often centered on India and Pakistan. Pakistan has pioneered a strategy of overcoming a disadvantage in conventional weapons by using nukes as a backstop. The idea is to say they can fight on a conventional level, but if it gets too bad, they will brandish nuclear capabilities and threaten to use them. We’ve seen Russia do its own version of this with Ukraine. Some people would say this goes back to the NATO strategy early in the Cold War of overcoming a conventional deficit vis-a-vis Russia by threatening nuclear use on a tactical or smaller level. The idea is that basically, if it looks like a country or group of allies is going to lose a conventional war, they will make the war look so scary that people will stop — and they’ll make it scary by threatening nukes, which, obviously, is a scary prospect for good reason.
China has not subscribed to this strategy. But the strategy introduces a lot of instability in a world where nuclear use feels more possible and when there’s a growing understanding that this strategy is employable. My view here might provide a different rendering of some of the issues that Tong has laid out. China is rapidly expanding its arsenal. The US was already freaking out about that, and then the expansion significantly passed how much we thought it was expanding. So, it’s pretty dramatic. We like to separate out issues like nuclear, climate, and a few others that affect all of humanity. And if you talk to US State Department folks, they tend to express a lot of frustration that all such issues get subordinated to broader geopolitical goals when dealing with China. Nuclear is no exception. And then diplomacy with China involving other risky areas of tech, which I’ll turn to in a minute, have more or less fallen off a cliff. So in debris and those types of issues, there’s a lot of frustration that talks have more or less broken down. There’s a lot of distrust. We obviously all know what happened with bio risks. There’s a lot of concern that China refuses to answer the crisis phone when things happen. We saw that with the balloon incident, and with other incidents as well.
As a final point on the nuclear issue, I’d like to reiterate what Tong mentioned about the change in political culture in China that we’ve seen. It’s more secretive, it’s more ossified, it’s more personalistic. I also think it is a return to a political ethos that prizes ruthlessness, domestically and internationally. And when you talk about nuclear weapons, ruthlessness is not a political virtue that we like to see front and center. That’s my take.
I understand that this crowd already has a lot of concern over AI — I think rightfully so. I’m probably less bullish on some of the big concerns than a lot of people in this crowd have. Nonetheless, my own theory and main message on AI is that every AI risk is significantly more acute in China. I can go into the specific reasons why, but at least in 2022, according to an Ipsos survey, America was among the least optimistic nations in the world on the effects of AI. We already have a lot of misgivings, and a certain paranoia or fear of failure, which a lot of the safety literature will tell you is essential to avoid catastrophes. China, on the other hand, was the most optimistic society in the world on AI. Not only that, but historically, Chinese AI developers have really taken pride in the fact that their government is willing to stomach more risk and collateral damage on tech issues.
I think one of the main reasons why perceptions of AI in the US and China are so disparate is due to internal Chinese politics. I’ll give three predominant factors. One is disaster amnesia. When very bad tech catastrophes happen in China, they are very quickly repressed and covered up. We routinely see falsified statistics and sometimes denial that tragedies have happened at all. The most extreme example is that, by conservative estimates, 200,000 individuals were killed by radiation poisoning from Chinese nuclear tests in the 1990s. The Chinese government still will not acknowledge this. There are a lot of other instances as well, including the Chinese milk formula scandal and the Wenzhou train crash.
A second thing that exacerbates the country’s amnesia is large-scale state ambitions that tend to invite disaster. China has explicitly stated that it plans to overtake the US in AI by 2030 and to become the world leader in biotech by 2035. China is really invested in these sorts of ambitious goals on time horizons that require a lot of quick movement. We’ve seen that regularly — and then catastrophe hits, the most severe example obviously being the Great Leap Forward, where a desire to leapfrog Western nations in steel production led to the largest famine in human history. Obviously, we’re not in Mao’s time, so a lot of things are different. But we’ve also seen repeats of this kind of dynamic of really ambitious, state-driven goals resulting in calamity in other areas. For example, there has been the one-child policy with the demographic imbalance that now plagues the country. A smaller case would be the satellite launch industry, which also ended in a lot of tragedy in the 1990s. The Belt and Road Initiative today has some features of this, where there are some large-scale projects that are literally falling apart. But more to the point, it has ratcheted up financial instability in developing countries around the world to a much greater degree than most people expected.
And then the third, and maybe the biggest, reason why we should be worried is authoritarian crisis mismanagement. Again, the most obvious example is COVID. But there are a lot more. I think one of the best ones is HIV in the 1990s. There was an outbreak for several years that infected at least a million individuals. And there were just layers of government cover ups and obfuscation. The problem intensified as a result. Yet the party leaders who oversaw the situation ended up being promoted, even after the effects of the cover up were known. So, the system really incentivizes avoiding early intervention that allows disasters to snowball into catastrophes. SARS was another example. And what makes COVID especially egregious is that after SARS, the government spent $850 million to create public health reporting mechanisms that would supposedly overcome this autocratic instinct to cover up disasters and let them snowball. And despite that $850 initiative, COVID ended up playing out in almost exactly the same way. The problem is pretty baked into the system.
If AI, or bio tools or labs, went haywire in China, I think the odds of early intervention to avert catastrophe would be low. With bio, I think the state has been extremely ambitious. It is harvesting genetic data on an industrial scale, both in China and around the world. But the safety record is pretty dramatic. So there’s obviously the COVID case, which is politicized, but you can probably intuit what I think happened there. There are other instances as well. A Beijing bio lab had four SARS leaks — two in the same year. And the largest known lab leak so far was Lanzhou, which leaked aerosolized Brucella, leading to more than 10,000 individuals contracting the disease. Between 1975 and 2016, we have recorded between 60 and 70 accidental lab incidents that resulted in exposure to a highly contagious pathogen. Disproportionately those are from the US and Europe, but that is most likely because those are the places that will report such incidents. The fact that some — and arguably all — of the most egregious cases have come from China suggests that there’s a lot more going on there that we should be worried about. We only hear about the ones that are so bad that they can’t be covered up.
On AI, it’s more preliminary and prospective. But I did have a friend who was speaking to a machine learning professor at Tsinghua University, which is my friend’s and my alma mater and one of the preeminent institutions in China in the area. And the professor said something about how they view AI as the most transformative technology since nuclear weapons, and therefore, they would like to be “the first to detonate.” They weren’t trying to be provocative, but I think it does communicate the immense amount of techno optimism — which goes hand-in-hand with little appreciation for the risks — in China. It stems from this disaster amnesia, China’s really big goals, and its development history. My fear is that if you mix all of that together, you have a catastrophe waiting to happen.
How to address different norms around sharing information
Matthew: Thank you both for your opening remarks. We have a lot of potential content to cover. The first question is about informational asymmetry — the differences between open and closed societies and the ways in which the US and Chinese governments process information. This seems like a potential source of conflict. For instance, China may be concerned that increased transparency will be a disproportionate disadvantage when it comes to nuclear weapons. How do we address this issue of informational asymmetry? How do we get the two governments to have conversations and reduce risk when there’s this big difference in the environment and norms around information in each society?
Tong: Basically, there are both tactical and strategic measures that are necessary. Tactical measures are, of course, easier. They involve promoting expert-level exchanges to make sure that at least Chinese nuclear, AI, and biosecurity policy experts are better informed of the potential risks and don’t develop serious misunderstandings about American policy or capabilities. That’s already very hard to do because of the tightening of security rules. In China, experts face greater difficulty traveling internationally and meeting with foreigners. But still, I think more frequent, substantive exchanges among experts are the most straightforward way to produce a positive near-term effect. There are many historical examples of false warnings and technical or operational errors leading to catastrophe. US experts can certainly share those with their Chinese counterparts. And once China develops a greater appreciation of the risks, it would be more incentivized to adopt unilateral measures to prevent those risks (even if it still rejects quantitative measures).
However, and more importantly, those tactical measures can only work to a certain extent, because catastrophes are more likely to happen in a closed, authoritarian system like China’s. We are facing a growing information perception gap between China and Western societies. And nuclear policy researchers like me are aware that nuclear deterrence is not perfect; it is error-prone. It is supposed to be a temporary solution that gives us time to work on our underlying systemic disagreements and problems. But right now, we are not really thinking about how to address those underlying problems and help China appreciate that the information restriction is fostering misunderstanding because there is a feedback loop misinforming the Chinese leaders themselves. That’s a serious and detrimental impact of information restriction. We need to think about measures to address these underlying challenges.
Bill: I think that was a great overview. I have little to add, except maybe to say that while there is incremental progress from dialogues — and some receptiveness on the Chinese side to us taking safety overtures in autonomous weapons or what have you, which they’re matching — on the whole, I’m pessimistic that communications are working. And there can be a temptation to think that if we just got the communication piece right, things would be better. That thinking is most absurd in calls for more crisis hotlines. We already have two, both of which are defunct. Adding a third won’t help.
This may run counter to the fear of the Chinese misinterpreting our intentions. But if we are living in an age of defunct diplomacy or discourse on these issues, I think the top priority should be ensuring that deterrence is really, really strong. Right now, that is how to avoid situations where there’s uncertainty about what might happen.
Problems resulting from risk compensation
Matthew: My next question is how do you think about problems resulting from risk compensation? You’ve both hit on this theme in different ways. One potential outcome of risk compensation is not responding to crisis communication hotlines. Another could be perceiving certain risk-reduction measures as guardrails for dangerous behavior. For example, China does not want US communications technology in the South China Sea. There are examples like this where, on the one hand, something seems as if it would directly reduce risk. But on the other hand, there are incentives from a competitive standpoint to either not adopt these risk-reduction measures, or if they occur, to engage in different behavior from what one would have engaged in otherwise because of the increased stability the measures bring. What are some ways to address this kind of strategic behavior? And are there areas where you’re not really worried about it?
Bill: That’s a really hard question. I am worried about it. I think we simply have reached a point in our relationship where, with quite a lot of the crisis communications, we psych ourselves out. There are endless psych-out cycles on both sides. It seems counterproductive. The only thing that seems to change the game is just facts on the ground in terms of power. And, unfortunately, for me, that’s just where we are. But I fear that’s not a very sophisticated or inspiring answer.
Tong: Well, one major challenge is the two sides fundamentally disagree about who is provoking the crisis. They need to not only talk about crisis communication mechanisms, but also have a candid conversation about which behaviors are destabilizing and irresponsible. They need to establish common views on what those behaviors are. I think that because the Chinese expert community is more exposed to international thinking, they are more accepting of many of the crisis management measures. It is the Chinese political leaders who are more interested in deliberately using risk manipulation to achieve broader security policy goals. It’s hard to actually talk with Chinese senior leaders in a substantive way. We can only talk with the more open-minded Chinese experts.
I think that’s worth doing. We can share the urgency of some of the crisis prevention and management measures. Although Chinese experts are less appreciative of certain risks involving new technologies and new escalation pathways, if we demonstrate those to them in a stronger manner, that could help cultivate consensus among the expert community over time. Then, they would be able to indirectly influence the thinking of the political leaders. That’s the best we can do at this moment.
Overhyped and underhyped risks
Matthew: Thank you. This question round will be rapid-fire. I’ll go through several different potential threats and ask you whether you consider each threat to be overhyped or underhyped relative to public perceptions.
First, do you think AI and nuclear command-and-control is an overrated or underrated threat?
Bill: In public discourse, it’s underrated. In expert discourse, I think it’s overrated.
Tong: I think it’s overrated in the sense that both American and Chinese policymakers must be very aware of the potential risks.
Matthew: What about AI-enabled targeting or strategic warning systems that enable one to find nuclear forces or get early warning of potential strategic intention? I don’t mean early warning in the sense of detecting a launch, just that there’s movement.
Bill: So far, I think that’s an overrated threat.
Tong: It’s hard to tell. There’s just not much discussion on that issue.
Matthew: The next threat is AI-enabled bio weapons or just bio weapons in general — would you say that’s over- or underrated?
Bill: Bio weapons, as opposed to biological leaks? I guess the potential is there, and I’m really worried about that, but it’s not imminent. But it could be crazy.
Tong: I think the risk is underrated, at least in the Chinese case. China tends to overestimate what the US is doing regarding using new biotechnology to develop military capabilities.
Matthew: Next on the list is hypersonic missiles: overhyped or underhyped?
Bill: Among the public, I think underhyped.
Tong: I think it’s both. It’s overrated by the public in terms of the perception about the revolutionary impact. But it’s underrated in that many policy experts still lack full appreciation of the potential escalation risks.
Matthew: What about missile defenses, both in the US and China, in terms of strategic stability and things of that nature?
Tong: I think it’s overrated. Countries tend to exaggerate the impact of missile defense on strategic stability.
Bill: I don’t know enough on that topic to have an opinion.
Matthew: The last one I’ll ask is more about different sources of threats. Some people focus on arms racing, some people focus on nuclear blackmail. Which are you more worried about?
Tong: For me, it’s interrelated. Arms racing leads to riskier nuclear postures and makes nuclear use in a crisis more likely to happen.
Bill: Yeah, I think I feel similarly. I mostly worry about how this could push up inadvertent escalation cycles.
AI governance structures, the Chinese defense minister’s dismissal, and the US’s semiconductor export policies
Matthew: All right, thank you. How optimistic are you about AI governance structures that include both the US and China, compared to structures that are more focused on coalition-building — for example, the US structuring cooperation among its allies?
Bill: I’m definitely more optimistic about coalition-building among allies having appreciable effects. I think that there is momentum and seemingly some receptivity to set some guardrails between the US, China, and the rest of the world. But I think setting agreements is way overhyped. Following those agreements is different, and something I’m less bullish on.
Matthew: What’s your read on the recent dismissal of China’s defense minister?
Tong: We just don’t know the real reason. Corruption is the most likely theory. The fact that we don’t know basically shows that there’s increasing opacity in Chinese high-level politics. If corruption is the reason, that’s indicative of a system that is increasingly secretive and eliminates internal checks and balances, which not only fosters greater room for corruption, but more importantly makes China less able to make coherent and consistent policies. I think that’s a greater concern than simply corruption.
Bill: There’s that great quote from Churchill that I can’t recall verbatim, but it’s something about how communist leadership struggles are like bulldogs fighting under a rug. You don’t know what’s happened, but you see the bones fly out. And we’ve seen a lot of bones fly out in recent months. I don’t know what’s going on under the rug. But the frequency and the level of dismissals is really conspicuous.
Matthew: What is the impact of the US’s semiconductor export control strategy and policies in recent years? Are they effective? Is there a backfire risk? How do you view the balance of the benefits and costs of these sorts of tech control policies?
Bill: I think they are working. And I think they have a runway to keep working, but that doesn’t mean they will always work. Whether or not it will backfire depends on three very opaque variables. One is the speed of AI progress. Second is the speed of China’s indigenization efforts. And third is if there are any game-changing algorithmic advancements that will change the needs around using existing chips or different types of chips to get to similar ends. All three of those things are very hard to assess.
Tong: If I may add a few words to an excellent answer, I think the US policy lacks clarity about what it wants China to do to avoid tighter export control rules. Yes, the US is unhappy about China’s civil-military fusion, but what specific measures should China take to make the US less concerned? That’s unclear. It’s hard for China to know what it should do. And although the US policy tries to limit China’s military development rather than undermine its civil, technological development, I think the US fails to draw a line between those two objectives. Therefore, to China, it appears that the US is undermining China’s overall growth.
Some US strategists think that should be the goal: to comprehensively undermine China’s technological competitiveness and its economic development. But if that’s the goal, then the overall impact on US-China relations is totally unclear. Maybe export controls can slow down and undermine China’s competitiveness. But China already has so many internal challenges, as Bill explained. The secrecy of the authoritarian system creates profound internal challenges. Do we really need to create extra difficulty for China that will thwart its development? I wonder if the US really needs tight export control rules, because China is already facing so many internal challenges.
More broadly, the goal should be to promote a more open and liberal Chinese society that is friendly to Western countries and the international community. But if you are taking measures that are going to alienate Chinese civil society, I think you will probably have a counterproductive impact.
Ideas for calibrating how the US cooperates and/or competes with China
Matthew: Thank you. We’re running short on time, so I’m going to ask one more question. Are there areas where US policy toward China should be more cooperative, and competition less intense? Where should competition be more intense? And where should it be more conditional, so as to provide better incentives? It’s a bit of a complicated question.
Bill: I can share the main things I would do. First, I would increase our deterrence in Taiwan as much as possible. That way, there would be no ambiguity that could spiral. I guess that is more on the competition side. Also, I would turn up recruitment of AI and bioengineers from China to 11, but try to retain them in the United States.
One area I see as under-conceptualized is bio risk. I don’t know whether we need more cooperation or more competition, but we need to think about it more. It hasn’t been explored enough, I think, in either direction.
Tong: I think the US should have tailored, nuanced policies with clear and justifiable goals, and focus on the overall objective of fostering a more liberal and open Chinese society. The US should consider not only what benefits the US in regard to its long-term competitiveness, but how to create a collaborative and open Chinese society that fundamentally reduces bilateral risks and makes peace and stability more sustainable.
Matthew: Thank you so much. With that, we’ll close out this session.