Great writeup! I also wrote about the restrictions here, with some good discussion. A few thoughts:
I think this slows China's AI progress by a few years. Losing Nvidia GPUs alone is a serious hit to ML researchers in China. They are building their own alternatives (for example, CodeGeeX is a GPT-sized language model trained entirely on Chinese GPUs), but the restrictions make GPUs in China even scarcer.
It probably also reduces US influence over China and Chinese AI in the future. We’re making them less reliant on us now, meaning we can’t use GPUs as leverage to force safety standards or other kinds of cooperation in the future.
I agree with your concern about cooperation on other existential risks. If we want to work together on climate change or banning research on dangerous pathogens, this hurts us.
China more or less does not care about existential risk from AI. Therefore, slowing their timelines is good, but sacrificing US influence over China is bad. It's unclear how these two balance out. If you have longer timelines, you'd probably prioritize long-term influence.
I think this definitely increases the chances of war with China, because it’s explicitly designed to prepare for a possible war. It’s Tom Cotton’s strategy of economic decoupling to “Beat China”.
From the standard US foreign policy viewpoint, I think this shift is well-warranted. Cooperation with China on trade has not given us the soft power we hoped it would. They’re as anti-democratic as ever, still committing human rights abuses and arguably only growing more aggressive. It’s time to move from the carrot to the stick.
From an EA standpoint, which places more weight on existential risk and less on typical US foreign policy interests, conflict with China definitely looks worse. The US stance suggests we would rather start World War III than allow China to become the top global superpower. I do believe that US democratic values are much better for the world than authoritarianism, and I'm scared of long-term authoritarianism, but solving it with global war doesn't help.
This is analogous to the question of Ukraine: do you support a democratic nation attacked by a despot, even at the risk of nuclear war? Avoiding Armageddon has to be the top priority, but over the years we've been able to pursue our other interests without spiraling into nuclear war.
Important operationalization: Do we defend Taiwan with US troops? Biden says yes. Taiwan is very important to defend (not least for TSMC and its semiconductors), but I think it’s probably better to lose Taiwan than raise the chances of nuclear war by 0.1%.
CodeGeeX used Huawei Ascend 910 AI processors, which were fabbed by TSMC, and TSMC will no longer be allowed to make such chips for China.
What evidence do you have that the Chinese government cares less about x-risks from AI than the current US government, let alone whatever government the US will have after 2024? If avoiding existential catastrophes from AI mostly depends on governments’ ability to regulate AI companies, does the US government seem to you better positioned than the Chinese government to establish and enforce such regulations?
Fair point; the answer is unclear and could change. The most important fact IMO is that two of the leading AGI labs, OpenAI and DeepMind, are explicitly concerned with x-risk and have invested seriously in safety. (Not as much as I'd like, but significant investments.) I'd rather those companies reach AGI than others that don't care about safety. They're US-based or US-owned (DeepMind is headquartered in London but owned by Alphabet), and they benefit relative to Chinese companies from US policy that slows China.
Second, while I don’t think Joe Biden thinks or cares about AI x-risk, I do think US policymakers are more likely to be convinced of the importance of AI x-risk. Most of the people arguing for AI risk are English speaking, and I think they’re gaining some traction. Some evidence:
The Global Catastrophic Risk Management Act, introduced by Senators Portman and Peters, is clearly longtermist in motivation. From the act: "Not later than 1 year after the date of enactment of this Act, the President, with support from the committee, shall conduct and submit to Congress a detailed assessment of global catastrophic and existential risk." Several press releases explicitly mentioned risks from advanced AI, though not the alignment problem. This seems indicative of longtermist and EA ideas gaining traction in DC.
https://www.congress.gov/bill/117th-congress/senate-bill/4488/text
https://www.hsgac.senate.gov/media/minority-media/portman-peters-introduce-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-to-national-security-
The National Security Commission on AI, commissioned by Congress in 2018, did not include x-risk in its report, which is disappointing. That group, led by former Google CEO Eric Schmidt, has continued its policy advocacy as the Special Competitive Studies Project. They are evidently aware of x-risk concerns: they cited Holden Karnofsky's writeup of the "most important century" hypothesis. Groups like these seem like they could be persuaded of the x-risk hypothesis and could successfully advocate for sensible policy to the US government.
https://www.scsp.ai/reports/mid-decade-challenges-for-national-competitiveness/preface/
Finally, there are think tanks that explicitly care about AI x-risk. My understanding is that CSET and CNAS are the two leaders, but the strong EA grantmaking system could easily spur more, and more successful, advocacy.
On the other hand, I’m unaware of a single major group in China that professes to care about x-risk from AI. I might not know if they did exist, so if there’s any evidence I’d love to hear it. China does seem to have much stronger regulatory skills, and would probably be better at implementing compute controls and other “pivotal acts”. But without a channel to communicate why they should do so, I’m skeptical that they will.
There is a research institute in China called the Beijing Academy of Artificial Intelligence. In May 2019 they published a document called “The Beijing Artificial Intelligence Principles” that included the following:
Harmony and Cooperation: Cooperation should be actively developed to establish an interdisciplinary, cross-domain, cross-sectoral, cross-organizational, cross-regional, global and comprehensive AI governance ecosystem, so as to avoid malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI with the philosophy of "Optimizing Symbiosis".

Long-term Planning: Continuous research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged. Strategic designs should be considered to ensure that AI will always be beneficial to society and nature in the future.
(This is just something I happened to stumble upon when it was published; there may be many people in China in relevant positions who take x-risks from AI seriously.)
Late response, but it may still be of interest: some colleagues and I spent some time surveying the existing literature on China x AI issues, and the resource list we produced includes a section on key actors and their views on AI risks. In general, I'd recommend the Concordia AI Safety newsletter for regular news of Chinese actors commenting on AI safety (and, more or less directly, on related x-risks).
Great podcast on it from Jordan Schneider, the document itself, and the press release.