China more or less does not care about existential risks from AI. Therefore, slowing their timelines is good, but sacrificing US influence over China is bad.
What evidence do you have that the Chinese government cares less about x-risks from AI than the current US government, let alone whatever government the US will have after 2024? If avoiding existential catastrophes from AI mostly depends on governments’ ability to regulate AI companies, does the US government seem to you better positioned than the Chinese government to establish and enforce such regulations?
Fair point; the answer is unclear and could change. The most important fact, IMO, is that two of the leading AGI companies, OpenAI and DeepMind, are explicitly concerned with x-risk and have invested seriously in safety. (Not as much as I’d like, but significant investments.) I’d rather those companies reach AGI than others that don’t care about safety. They’re based in the US (or, in DeepMind’s case, owned by a US company) and benefit relative to Chinese companies from US policy that slows China.
Second, while I don’t think Joe Biden thinks or cares about AI x-risk, I do think US policymakers are more likely than Chinese ones to be convinced of the importance of AI x-risk. Most of the people arguing for AI risk are English-speaking, and I think they’re gaining some traction. Some evidence:
The Global Catastrophic Risk Management Act, introduced by Senators Portman and Peters, is clearly longtermist in motivation. From the bill: “Not later than 1 year after the date of enactment of this Act, the President, with support from the committee, shall conduct and submit to Congress a detailed assessment of global catastrophic and existential risk.” Several press releases explicitly mentioned risks from advanced AI, though not the alignment problem. This seems indicative of longtermist ideas and EAs gaining traction in DC.
https://www.congress.gov/bill/117th-congress/senate-bill/4488/text
https://www.hsgac.senate.gov/media/minority-media/portman-peters-introduce-bipartisan-bill-to-ensure-federal-government-is-prepared-for-catastrophic-risks-to-national-security-
The National Security Commission on AI, commissioned by Congress in 2018, did not include x-risk in its report, which is disappointing. That group, led by former Google CEO Eric Schmidt, has continued its policy advocacy as the Special Competitive Studies Project. They are evidently aware of x-risk concerns, having cited Holden Karnofsky’s writeup of the “most important century” hypothesis. Groups like these seem like they could be persuaded of the x-risk hypothesis, and could successfully advocate sensible policy to the US government.
https://www.scsp.ai/reports/mid-decade-challenges-for-national-competitiveness/preface/
Finally, there are think tanks that explicitly care about AI x-risk. My understanding is that CSET and CNAS are the two leaders, but the strong EA grantmaking system could easily spur more, and more successful, advocacy.
On the other hand, I’m unaware of a single major group in China that professes to care about x-risk from AI. I might not know if they did exist, so if there’s any evidence I’d love to hear it. China does seem to have much stronger regulatory capacity, and would probably be better at implementing compute controls and other “pivotal acts”. But without a channel for communicating why they should do so, I’m skeptical that they will.
On the other hand, I’m unaware of a single major group in China that professes to care about x-risk from AI. I might not know if they did exist, so if there’s any evidence I’d love to hear it.
There is a research institute in China called the Beijing Academy of Artificial Intelligence. In May 2019 they published a document called “The Beijing Artificial Intelligence Principles” that included the following:
Harmony and Cooperation: Cooperation should be actively developed to establish an interdisciplinary, cross-domain, cross-sectoral, cross-organizational, cross-regional, global and comprehensive AI governance ecosystem, so as to avoid malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI with the philosophy of “Optimizing Symbiosis”.
Long-term Planning: Continuous research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged. Strategic designs should be considered to ensure that AI will always be beneficial to society and nature in the future.
(This is just something that I happened to stumble upon when it was published; there may be many people in relevant positions in China who take x-risks from AI seriously.)
Late response, but it may still be of interest: some colleagues and I spent some time surveying the existing literature on China x AI issues, and the resource list we produced includes a section on key actors and their views on AI risks. In general, I’d recommend the Concordia AI Safety newsletter for regular news of Chinese actors commenting on AI safety (and, more or less directly, on related x-risks).