On the other hand, I’m unaware of a single major group in China that professes to care about x-risk from AI. I might not know if they did exist, so if there’s any evidence I’d love to hear it.
There is a research institute in China called the Beijing Academy of Artificial Intelligence. In May 2019 they published a document called “The Beijing Artificial Intelligence Principles” that included the following:
Harmony and Cooperation: Cooperation should be actively developed to establish an interdisciplinary, cross-domain, cross-sectoral, cross-organizational, cross-regional, global and comprehensive AI governance ecosystem, so as to avoid malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI with the philosophy of “Optimizing Symbiosis”.
Long-term Planning: Continuous research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged. Strategic designs should be considered to ensure that AI will always be beneficial to society and nature in the future.
(This is just something that I happened to stumble upon when it was published; there may be many people in China at relevant positions that take x-risks from AI seriously.)