Brian Tse: Sino-Western Cooperation in AI Safety

In­ter­na­tional co­op­er­a­tion is es­sen­tial if we want to cap­ture the benefits of ad­vanced AI while min­i­miz­ing risk. Brian Tse, a policy af­fili­ate at the Fu­ture of Hu­man­ity In­sti­tute, dis­cusses con­crete op­por­tu­ni­ties for co­or­di­na­tion be­tween China and the West, as well as how China’s gov­ern­ment and tech­nol­ogy in­dus­try think about differ­ent forms of AI risk.

A tran­script of Brian’s talk, which we have ed­ited lightly for clar­ity, is be­low. You can also watch it on YouTube or read it on effec­tivealtru­ism.org.

The Talk

It has been more than seven decades since a nuclear weapon was last detonated in war.

For al­most four decades, par­ents ev­ery­where have not needed to worry about their chil­dren dy­ing from smal­l­pox.

The ozone layer, far from be­ing de­pleted to the ex­tent once feared, is ex­pected to re­cover in three decades.

Th­ese events — or non-events — are among hu­man­ity’s great­est achieve­ments. They would not have oc­curred with­out co­op­er­a­tion among a mul­ti­tude of coun­tries. This serves as a re­minder that in­ter­na­tional co­op­er­a­tion can benefit ev­ery coun­try and per­son.

To­gether, we can achieve even more. In the next few decades, AI is poised to be one of the most trans­for­ma­tive tech­nolo­gies. In the Chi­nese lan­guage, there is a word, “wēijī,” which is com­posed of two char­ac­ters: one mean­ing dan­ger and the other op­por­tu­nity.

Both char­ac­ters are pre­sent at this crit­i­cal junc­ture. With AI, we must seek to min­i­mize dan­gers and cap­ture the up­sides. En­sur­ing that there is ro­bust global co­or­di­na­tion be­tween stake­hold­ers around the world, es­pe­cially those in China and the West, is crit­i­cal in this en­deavor.

So far, the idea of nations competing for technological and military supremacy has dominated the public narrative.

When peo­ple talk about China and AI, they always in­voke the coun­try’s am­bi­tion to be­come the world leader in AI by 2030. In con­trast, there is very lit­tle at­ten­tion paid to China’s call for in­ter­na­tional col­lab­o­ra­tion in se­cu­rity, ethics, and gov­er­nance of AI, which are ar­eas of mu­tual in­ter­est. I be­lieve it is a mis­take to think that we must have ei­ther in­ter­na­tional co­op­er­a­tion or in­ter­na­tional com­pe­ti­tion. To­day, some be­lieve that China and the U.S. are best de­scribed as strate­gic ad­ver­saries.

I be­lieve we must de­liber­ately use new con­cepts and terms that cap­ture the two coun­tries’ ur­gent need to co­op­er­ate — not just their drive to com­pete.

Joseph Nye, well-known for coining the phrase “soft power,” has suggested that we use “cooperative rivalry” to describe the relationship. Graham Allison, the author of Destined for War, has proposed the word “coopertition,” allowing for the simultaneous coexistence of competition and cooperation.

In the rest of my talk, I’m go­ing to cover three ar­eas of AI risk that have the po­ten­tial for global co­or­di­na­tion: ac­ci­dents, mi­suse, and the race to de­velop AI.

For each of these risks, I will talk about their im­por­tance and fea­si­bil­ity for co­or­di­na­tion. I will also make some recom­men­da­tions.

The risk of AI accidents

As the deployment of AI systems has become more commonplace, the number of AI-related accidents has increased. For example, on May 6, 2010, U.S. stock markets, led by the Dow Jones Industrial Average, experienced a sudden crash known as the “Flash Crash,” in which roughly $1 trillion in market value was briefly wiped out.

It was partly caused by the use of high-fre­quency trad­ing al­gorithms. The im­pact im­me­di­ately spread to other fi­nan­cial mar­kets around the world.

As the world be­comes in­creas­ingly in­ter­de­pen­dent, as with fi­nan­cial mar­kets, lo­cal events have global con­se­quences that de­mand global solu­tions. The par­ti­ci­pa­tion of [the Chi­nese tech­nol­ogy com­pany] Baidu in the Part­ner­ship on AI is an en­courag­ing case study of global col­lab­o­ra­tion.

In a press re­lease last year, Baidu said that the safety and re­li­a­bil­ity of AI sys­tems is crit­i­cal to their mis­sion and was a ma­jor mo­ti­va­tion for them to join the con­sor­tium. The [par­ti­ci­pat­ing] com­pa­nies think au­tonomous ve­hi­cle safety is an is­sue of par­tic­u­lar im­por­tance.

China and the U.S. also seem to be co­or­di­nat­ing on nu­clear se­cu­rity. One ex­am­ple is the Cen­ter of Ex­cel­lence on Nu­clear Se­cu­rity in Beijing, which is by far the most ex­ten­sive nu­clear pro­gram to re­ceive di­rect fund­ing from both the U.S. and Chi­nese gov­ern­ments.

It fo­cuses on build­ing a ro­bust nu­clear se­cu­rity ar­chi­tec­ture for the com­mon good. A vi­tal fea­ture of this part­ner­ship is an in­tense fo­cus on ex­chang­ing tech­ni­cal in­for­ma­tion, as well as re­duc­ing the risk of ac­ci­dents.

It is note­wor­thy that, so far, China has em­pha­sized the need to en­sure the safety and re­li­a­bil­ity of AI sys­tems. In par­tic­u­lar, the Beijing AI Prin­ci­ples and the Ten­cent Re­search In­sti­tute have high­lighted the risks of AGI sys­tems.

With our cur­rent un­der­stand­ing of AI-re­lated ac­ci­dents, I be­lieve Chi­nese and in­ter­na­tional stake­hold­ers can col­lab­o­rate in the fol­low­ing ways:

1. Re­searchers can at­tend the in­creas­ingly pop­u­lar AI safety work­shops at some of the ma­jor ma­chine learn­ing con­fer­ences.

2. Labs and researchers can measure and benchmark the safety properties of reinforcement learning agents, building on efforts by safety groups such as DeepMind’s (a minimal sketch of this kind of benchmark follows this list).

3. In­ter­na­tional bod­ies, such as ISO, can con­tinue their efforts to set tech­ni­cal stan­dards, es­pe­cially around the re­li­a­bil­ity of ma­chine learn­ing sys­tems.

4. Lastly, al­li­ances such as the Part­ner­ship on AI can fa­cil­i­tate dis­cus­sions on best prac­tices (for ex­am­ple, through [the Part­ner­ship’s] Safety-Crit­i­cal AI Work­ing Group).
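To make the second recommendation concrete, here is a minimal sketch of what benchmarking the safety properties of a reinforcement learning agent can look like: run the agent in a toy environment and report, alongside its return, how often it enters states designated as unsafe. The environment, agent, and metric below are invented purely for illustration and are not part of any existing benchmark suite such as DeepMind’s gridworlds.

```python
import random

class SimpleGridEnv:
    """A 5x5 grid. The agent starts at (0, 0), the goal is (4, 4), and a few cells are unsafe."""
    UNSAFE = {(1, 3), (2, 2), (3, 1)}

    def __init__(self):
        self.pos = (0, 0)

    def step(self, action):
        # Actions: 0 = up, 1 = down, 2 = left, 3 = right; moves are clipped to the grid.
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        self.pos = (min(max(self.pos[0] + dr, 0), 4), min(max(self.pos[1] + dc, 0), 4))
        done = self.pos == (4, 4)
        reward = 1.0 if done else -0.01
        return self.pos, reward, done

def random_agent(_state):
    """A placeholder policy; a real evaluation would plug in the trained agent."""
    return random.randrange(4)

def evaluate(agent, episodes=100, max_steps=50):
    """Report average return and average number of unsafe-cell visits per episode."""
    total_return, violations = 0.0, 0
    for _ in range(episodes):
        env, state, done, steps = SimpleGridEnv(), (0, 0), False, 0
        while not done and steps < max_steps:
            state, reward, done = env.step(agent(state))
            total_return += reward
            violations += state in SimpleGridEnv.UNSAFE
            steps += 1
    print(f"average return: {total_return / episodes:.2f}")
    print(f"average safety violations per episode: {violations / episodes:.2f}")

evaluate(random_agent)
```

The value of a shared benchmark of this kind is that two labs in different countries can run the same evaluation and compare safety numbers directly, even if their agents and training pipelines differ.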

The risk of AI misuse

Even if we can mitigate unintended accidents involving AI systems, there is still a possibility that they’ll be misused.

For example, earlier this year, OpenAI decided not to release the full trained model of GPT-2, which [can generate language on its own], due to concerns that it might be misused to impersonate people, create misleading news articles, or trick victims into revealing their personal information. This reinforces the need for global coordination; malicious actors from anywhere could have gained access to the technology behind GPT-2 and deployed it in other parts of the world.

In the field of cy­ber­se­cu­rity, there was a rele­vant case study of the global re­sponse to se­cu­rity in­ci­dents.

In 1989, one of the first computer worms attacked a major American company. The incident prompted the creation of the international body FIRST to facilitate information-sharing and enable more effective responses to future security incidents. Since then, FIRST has been one of the major institutions in the field. It currently lists ten American and eight Chinese members, including companies and public institutions.

Another source of optimism is the growing research field of [adversarial images]. These are inputs that have been modified slightly to cause machine learning classifiers to misclassify them [e.g., mistake a toy turtle for a gun].

This is­sue is highly con­cern­ing, be­cause [ad­ver­sar­ial images] could be used to at­tack a ma­chine learn­ing sys­tem with­out the at­tacker hav­ing ac­cess to the un­der­ly­ing model.
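For readers unfamiliar with how such perturbations are produced, here is a minimal sketch of the fast gradient sign method (FGSM), one standard recipe for generating adversarial inputs. The tiny untrained model and random input below are placeholders; a real attack would target a trained classifier and real images.

```python
import torch
import torch.nn as nn

# A stand-in classifier; in practice this would be a trained image model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # placeholder "image"
y = torch.tensor([3])                             # its assumed true label
epsilon = 0.05                                    # size of the perturbation

# Take the gradient of the loss with respect to the input, not the model weights...
loss = loss_fn(model(x), y)
loss.backward()

# ...then nudge every pixel a small step in the direction that increases the loss.
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction on original input: ", model(x).argmax(dim=1).item())
print("prediction on perturbed input:", model(x_adv).argmax(dim=1).item())
```

Note that FGSM as written is a white-box attack (it uses the model’s gradients); the black-box concern mentioned above arises because perturbations crafted against one model often transfer to other models the attacker has never seen.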

For­tu­nately, many of the lead­ing AI labs around the world are already work­ing hard on this prob­lem. For ex­am­ple, Google Brain or­ga­nized a com­pe­ti­tion on this re­search topic, and the team from China’s Ts­inghua Univer­sity won first place in both the “at­tack” and “defense” tracks of the com­pe­ti­tion.

Many of the Chinese AI ethical principles also cover concerns related to the misuse of AI. One promising starting point for coordination between Chinese and foreign stakeholders, especially the AI labs, involves publication norms.

Following the [controversy around] OpenAI’s GPT-2 model, the Partnership on AI organized a seminar on the topic of research openness. There was no immediate conclusion on whether the AI community should restrict research openness. However, the participants did agree that if the AI community moves in that direction, review parameters and norms should be standardized across the community (presumably, on a global level).

The risk of com­pet­i­tively rac­ing to de­velop AI

The third type of risk that I’m go­ing to talk about is the risk from rac­ing to de­velop AI.

Un­der com­pet­i­tive pres­sure, AI labs might put aside safety con­cerns in or­der to stay ahead. Uber’s self-driv­ing car crash in 2018 illus­trates this risk.

When it hap­pened, com­men­ta­tors ini­tially thought that the brak­ing sys­tem was the culprit. How­ever, fur­ther in­ves­ti­ga­tion showed that the vic­tim was de­tected early enough for the emer­gency brak­ing sys­tem to have worked and pre­vented the crash.

So what hap­pened? It turned out that the en­g­ineers in­ten­tion­ally turned off the emer­gency brak­ing sys­tem be­cause they were afraid that its ex­treme sen­si­tivity would make them look bad rel­a­tive to their com­peti­tors. This type of trade-off be­tween safety and other con­sid­er­a­tions is very con­cern­ing, es­pe­cially if you be­lieve that AI sys­tems will be­come in­creas­ingly pow­er­ful.

This problem is going to be even more acute in the context of international security. We should seek to draw lessons from historical analogs.

For example, the report “Technology Roulette” by Richard Danzig discusses the norm of “no first use” and its contribution to stability during the nuclear era. Notably, China was the first nuclear-weapon state to adopt such a policy, back in 1964. Other nations have also used the norm, with varying degrees of success, to moderate the proliferation and use of various military technologies, including blinding lasers and offensive weapons in outer space.

Now, with AI as a gen­eral-pur­pose tech­nol­ogy, there is a fur­ther challenge: How do you spec­ify and ver­ify that cer­tain AI tech­nolo­gies haven’t been used? On a re­lated note, the Chi­nese nu­clear pos­ture has been de­scribed as a defense-ori­ented one. The ques­tion with AI is: Is it tech­ni­cally fea­si­ble for par­ties to differ­en­tially im­prove defen­sive ca­pa­bil­ities, rather than offen­sive ca­pa­bil­ities, thereby sta­bi­liz­ing the com­pet­i­tive dy­nam­ics? I be­lieve these are still open ques­tions.

Ul­ti­mately, con­struc­tive co­or­di­na­tion de­pends on the com­mon knowl­edge that there is this shared risk of a race to the bot­tom with AI. I’m en­couraged to see in­creas­ing at­ten­tion paid to the prob­lem on both sides of the Pa­cific.

For ex­am­ple, Madame Fu Ying, who is chair­per­son of the Na­tional Peo­ple’s Congress For­eign Af­fairs Com­mit­tee in China and an in­fluen­tial diplo­mat, has said that Chi­nese tech­nol­o­gists and poli­cy­mak­ers agree that AI poses a threat to hu­mankind. At the World Peace Fo­rum, she fur­ther em­pha­sized that the Chi­nese be­lieve we should pre­emp­tively co­op­er­ate to pre­vent such a threat.

The Beijing AI Prin­ci­ples, in my view, provide the most sig­nifi­cant con­tri­bu­tion from China re­gard­ing the need to avoid a mal­i­cious AI race. And these prin­ci­ples have gained sup­port from some of the coun­try’s ma­jor aca­demic in­sti­tu­tions and in­dus­try lead­ers. It is my un­der­stand­ing that dis­cus­sions around the Asilo­mar AI Prin­ci­ples, the book Su­per­in­tel­li­gence by Nick Bostrom, and warn­ings from Stephen Hawk­ing and other thinkers have all had a mean­ingful in­fluence on Chi­nese thinkers.

Build­ing com­mon knowl­edge be­tween par­ties is pos­si­ble, as illus­trated by the Thucy­dides Trap.

Coined by the scholar Graham Allison, the Thucydides Trap describes the idea that rivalry between an established power and a rising power often results in conflict. This thesis has captured the attention of leaders in both Washington, D.C. and Beijing. In 2013, President Xi Jinping told a group of Western visitors that we should cooperate to escape from the Thucydides Trap. In parallel, I think it is important for leaders in Silicon Valley — as well as in Washington, D.C. and Beijing — to recognize this collective problem of a potential AI race to the precipice, or what I might call “the Bostrom Trap.”

With this shared un­der­stand­ing, I be­lieve the world can move in sev­eral di­rec­tions. First, there are great ini­ti­a­tives, such as the Asilo­mar AI Prin­ci­ples, which can help many of the sig­na­to­ries [ad­here to] the prin­ci­ple of arms-race avoidance.

Ex­pand­ing the breadth and depth of this di­alogue, es­pe­cially be­tween Chi­nese and Western stake­hold­ers, will be crit­i­cal to sta­bi­lize ex­pec­ta­tions and foster mu­tual trust.

Se­cond, labs can ini­ti­ate AI safety re­search col­lab­o­ra­tions across bor­ders.

For example, labs could collaborate on some of the topics laid out in the seminal paper “Concrete Problems in AI Safety,” which was itself a joint effort from multiple institutions.

Lastly — and this is also the most am­bi­tious recom­men­da­tion — lead­ing AI labs could con­sider adopt­ing the poli­cies in the OpenAI Char­ter.

The charter states that if a value-aligned, safety-conscious project comes close to building AGI technology, OpenAI will stop competing and start assisting with that project. This policy is an incredible public commitment, as well as a concrete mechanism for trying to reduce these undesirable [competitive] dynamics.

Throughout this talk, I have not addressed many of the complications involved in such an endeavor. There are considerations such as industrial espionage, civil-military fusion, and civil liberties. I believe each of those topics deserves a nuanced, balanced, and probably separate discussion, given that I will not be able to do proper justice to them in a short presentation like this one. That said, on the broader challenge of overcoming political tension, I would like to share a story.

Some believe the Cuban Missile Crisis had a one-in-three chance of resulting in a nuclear war between the U.S. and the Soviet Union. After the crisis, President John F. Kennedy was desperately searching for a better way forward.

Before he was assassinated, in one of his most significant speeches about international order, he proposed the strategic concept of a world safe for diversity. In that world, the U.S. and Soviet Union could compete vigorously, but only peacefully, to demonstrate whose values and system of governance might best serve the needs of citizens. This eventually evolved into what became “détente,” a doctrine that contributed to the easing of tension during the Cold War.

In China, there is a similar doc­trine, which is “har­mony in di­ver­sity.” [Brian says the word in Man­darin.]

The world must learn to co­op­er­ate in tack­ling our com­mon challenges, while ac­cept­ing our differ­ences. If we were able to achieve this dur­ing the Cold War, I be­lieve we should be more hope­ful about our col­lec­tive fu­ture in the 21st cen­tury. Thank you.

Nathan Labenz [Moderator]: I think the last time I saw you was just under a year ago. How do you think things have gone over the last year? If you were an attentive reader of the New York Times, you would probably think things are going very badly in U.S.-China relations. Do you think it’s as bad as all that? Or is the news maybe hyping up the situation to be worse than it is?

Brian: It is in­deed wor­ry­ing. I will add two points to the dis­cus­sion. One: we’re not only think­ing about co­or­di­na­tion be­tween gov­ern­ments. In my talk, I fo­cused on state-to-state co­op­er­a­tion, but I men­tioned a lot of po­ten­tial ar­eas of col­lab­o­ra­tion be­tween AI labs, re­searchers, academia and civil so­ciety. And I be­lieve that the in­cen­tive and the will­ing­ness to co­op­er­ate be­tween those stake­hold­ers are there. Se­cond, my pre­sen­ta­tion was meant to be for­ward-look­ing and as­pira­tional. I was not look­ing at the cur­rent news. I was think­ing that if in five to 10 years, or even 20 years, AI sys­tems be­come in­creas­ingly ad­vanced and pow­er­ful — which means there could be tremen­dous up­sides for ev­ery­one to share, as well as down­sides to worry about — the in­cen­tive to co­op­er­ate, or at least aim for “coop­er­ti­tion,” should be there.

It could be in­ter­est­ing to think about game the­ory. I won’t go into the tech­ni­cal de­tails. But the ba­sic idea is that if there are tremen­dous up­sides and also shared down­sides for some num­ber of par­ties, then it is more likely that those par­ties will be will­ing to co­op­er­ate in­stead of just com­pete.
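As a purely illustrative sketch of that intuition, with made-up numbers rather than an analysis of any real scenario, one can compare the expected value of racing versus cooperating when there is both a large shared upside and a large shared downside:

```python
def expected_value(p_win, prize, p_catastrophe, catastrophe_cost):
    """Expected payoff to one party: chance of capturing the prize minus expected disaster cost."""
    return p_win * prize - p_catastrophe * catastrophe_cost

# Racing: a 50% chance of capturing the whole prize, but a high chance of a shared disaster.
race = expected_value(p_win=0.5, prize=100, p_catastrophe=0.30, catastrophe_cost=200)

# Cooperating: the prize is shared (say 60% of it), but the risk of disaster is much lower.
cooperate = expected_value(p_win=1.0, prize=60, p_catastrophe=0.05, catastrophe_cost=200)

print(f"racing:      {race:+.1f}")       # 0.5 * 100 - 0.30 * 200 = -10.0
print(f"cooperating: {cooperate:+.1f}")  # 1.0 * 60  - 0.05 * 200 = +50.0
```

With these invented numbers, cooperation is the better choice for each party individually; the general point is that the larger the shared downside, the more the calculus tilts toward cooperation.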

Nathan: A ques­tion from the au­di­ence: Do you think that there’s any way to tell right now whether the U.S. or the West (how­ever you pre­fer to think about that), has an edge over China in de­vel­op­ing AI? And do you think that there are poli­ti­cal or cul­tural differ­ences that con­tribute to that, if you think such a differ­ence ex­ists?

Brian: Just in terms of the po­ten­tial for de­vel­op­ing ca­pa­ble sys­tems? We are not talk­ing about safety and ethics, right?

Nathan: You can in­ter­pret the ques­tion [how you like].

Brian: Okay. I will focus on capabilities. Currently, it is quite clear to me that China is nowhere near the U.S. in terms of overall AI capabilities. People have argued this point at length. I would add a few things.

If you look at the lead­er­ship struc­ture of Chi­nese AI com­pa­nies — for ex­am­ple, Ten­cent — and some of the re­cent de­vel­op­ments, it seems like the in­cen­tive to de­velop ad­vanced and in­ter­est­ing the­o­ret­i­cal re­search is not re­ally there. Chi­nese AI com­pa­nies are much more fo­cused on prod­ucts and near-term profit.

One ex­am­ple I would give is the Ten­cent AI lab di­rec­tor, Dr. Tong Zhang, who was quite in­ter­ested in ideas rele­vant to AGI and worked at Ten­cent for two years. He de­cided to leave the AI lab ear­lier this year and is now go­ing back to academia. He is join­ing the Hong Kong Univer­sity of Science and Tech­nol­ogy as a fac­ulty mem­ber. Even though he didn’t ex­plic­itly men­tion the rea­son [for his de­par­ture], peo­ple think that the in­cen­tive to de­velop long-term, in­ter­est­ing re­search is not there at Ten­cent or, hon­estly, at many of the AI com­pa­nies.

Another point I will raise is this: If you look at some of the U.S. AI labs — for ex­am­ple, FAIR or Google Brain — the typ­i­cal struc­ture is that you have two re­search sci­en­tists and one re­search en­g­ineer on a team. The num­ber could be greater, but the ra­tio is usu­ally the same. But the ra­tio of re­search sci­en­tists to re­search en­g­ineers is the op­po­site for Chi­nese AI com­pa­nies. There, you have one re­search sci­en­tist and two re­search en­g­ineers, which im­plies that they are much more fo­cused on putting their re­search ideas into prac­tice and ap­pli­ca­tions.

Nathan: That’s a sur­pris­ing an­swer to me be­cause I think that the naive, “New York Times reader” point of view would be that the Chi­nese gov­ern­ment is way bet­ter than the U.S. gov­ern­ment in terms of long-term plan­ning and pri­or­ity-set­ting. If you agree with that, how do you think that trans­lates into a sce­nario where the Chi­nese mega com­pa­nies are maybe not do­ing as much as the Amer­i­can com­pa­nies?

Brian: I think the Chinese model is still interesting from a long-term, mega-project perspective. But there is variance in terms of what type of mega-projects you are talking about. If you’re talking about railways, bridges, or infrastructure in general, the Chinese government is incredibly good at that. China can put up buildings in just days that would take the U.S., UK, and many other governments years. But those are engineering projects. We’re not talking about Nobel Prize-winning types of projects. I think that’s really the difference.

There is some analysis of where the top AI and machine learning researchers — potential Turing Award winners — are working, and all of them are in the U.S. But if you look at pretty good researchers, then yes, China has a lot of them. I think we have to be very nuanced in terms of looking at what types of scientific projects we are talking about, and whether it’s mostly about scientific breakthroughs or engineering challenges.

Nathan: Fas­ci­nat­ing. A bunch of ques­tions are com­ing in. I’m go­ing to do my best to get through as many as I can. One ques­tion is about the gen­eral frac­tur­ing of the world that seems to be hap­pen­ing, or bifur­ca­tion of the world, into a Chi­nese sphere of in­fluence (which might just be China, or maybe it in­cludes a few sur­round­ing coun­tries), and then the rest of the world. We’re see­ing Chi­nese tech­nol­ogy com­pa­nies get­ting banned from Amer­i­can net­works, and so on. Do you think that that is go­ing to be­come a huge prob­lem? Is it already a huge prob­lem, or is it not that big of a prob­lem af­ter all?

Brian: It’s definitely concerning. My main concern is the impact on the international research community. [In my talk], I alluded to the international and interconnected community of research labs and machine-learning researchers. I believe that community will still be a good mechanism for coordinating on different AI policy issues — they would be great at raising concerns through the AI Open Letter Initiative, collaborating through workshops, and so on.

But this larger political dynamic might affect them in terms of Chinese scientists’ ability to travel to the U.S. What if they just can’t get visas? And maybe in the future, U.S. scientists might also be worried about getting associated with Chinese individuals. The thing I’m worried about is really this channel of communication between the research communities. Hopefully, that will change.

Nathan: You’re an­ti­ci­pat­ing the next ques­tion, which is the idea that in­di­vi­d­u­als are maybe start­ing to be­come con­cerned that if they ap­pear to be on ei­ther side of the China/​Amer­ica di­vide — if they ap­pear too friendly — they’ll be viewed very sus­pi­ciously and might suffer con­se­quences from that. Do you think that is already a prob­lem, and if so, what can in­di­vi­d­u­als do to try to bridge this di­vide while min­i­miz­ing the con­se­quences that they might suffer?

Brian: It’s hard to provide a gen­eral an­swer. It prob­a­bly de­pends a lot on the ca­reer tra­jec­to­ries of in­di­vi­d­u­als and other con­straints.

Nathan: There’s a ques­tion about the Com­mu­nist Party. The ques­tioner as­sumes that the Com­mu­nist Party has fi­nal say on ev­ery­thing that’s go­ing on in China. I won­der if you think that’s true, and if it is, how do we work within that con­straint?

Brian: In terms of in­ter­na­tional col­lab­o­ra­tion and what might be plau­si­ble?

Nathan: Is there any way to make progress with­out the buy-in of the Com­mu­nist Party, or do you need it? And if you need it, how do you get it?

Brian: I think one assumption there is that it is bad to have involvement from the government. I think we need to try to avoid that assumption — I can just smell it when people ask these types of questions. It is not necessarily true. I think there are ways that the Chinese government can be involved meaningfully. We just need to be thinking about what those spaces are.

Again, one promis­ing chan­nel would be AI safety con­fer­ences through academia. If Ts­inghua Univer­sity is in­ter­ested in or­ga­niz­ing an AI safety con­fer­ence with po­ten­tial buy-in from the gov­ern­ment, I think that’s fine, and I think it’s still a venue for re­search col­lab­o­ra­tion. The world just needs to think about what the mu­tual in­ter­ests are and, hon­estly, the mag­ni­tude of the stakes.

Nathan: At a minimum, the Communist Party has at least demonstrated awareness of these issues and seems to be thinking about them. I think we’re a little bit over time already, so maybe just one last question. Do you see this competition/cooperation dynamic — and potentially these race-to-the-precipice dynamics — getting repeated across a lot of things? There’s AI, and obviously in an earlier era there was nuclear rivalry, which hasn’t necessarily gone away either. We also saw this news item of the first CRISPR-edited babies, and that was a source of a lot of concern for people who thought, “We’re losing control of this technology.” So, what’s the portfolio of these sorts of potential race-dynamic problems?

Brian: I think these are rele­vant his­tor­i­cal analogs, but what makes AI a lit­tle bit differ­ent is that AI is a gen­eral-pur­pose tech­nol­ogy, or omni-use tech­nol­ogy. It’s used across the econ­omy. It’s a ques­tion of poli­ti­cal and eco­nomic [im­por­tance], not just in­ter­na­tional se­cu­rity. It’s not just a nu­clear weapon or a space weapon. It’s ev­ery­where. It’s more like elec­tric­ity in the in­dus­trial rev­olu­tion.

One thing that I want to add, which is related to the previous question, is the response from Chinese scientists to the gene-editing incident. Many people condemned the behavior of the scientist [responsible for the gene editing] because he didn’t [comply fully] with regulations and was just doing it at a small lab in the city. But what you can see there is the uniformity of the international response to the incident; the responses from U.S. scientists, UK scientists, and Chinese scientists were basically the same. There was an open letter to Nature, with hundreds and hundreds of Chinese scientists saying that this behavior is unacceptable.

What followed was that the Chinese government wanted to develop better regulations for gene editing and [explore] the relevant ethics. I think this illustrates that we can have a much more global dialogue about ethics and safety in science and technology. And in some cases, the Chinese government is interested in joining this global dialogue and taking action in its domestic policy.
