Brian Tse: Risks from Great Power Conflicts

War between the world's great powers sharply increases the risk of a global catastrophe: nuclear weapon use becomes more likely, as does the development of other unsafe technology. In this talk from EA Global 2018: London, Brian Tse explores the prevention of great power conflict as a potential cause area.

Below is a transcript of Brian's talk, which CEA has lightly edited for clarity. You can also read this talk on the Effective Altruism website, or watch it on YouTube.

The Talk

Many people believe that we are living in the most peaceful period of human history. John Lewis Gaddis proclaimed that we live in a Long Peace period, beginning at the end of the Second World War.

Steven Pinker further popularized the idea of the Long Peace in his book, The Better Angels of Our Nature, and explained the period by pointing to the pacifying forces of trade, democracy, and international society.

This graph shows the percentage of time when great powers have been at war with each other. 500 years ago, the great powers were almost always fighting each other. However, the frequency has declined steadily.

This graph, however, shows the deadliness of war, and there the trend runs in the opposite direction. Although great powers go to war with each other less often, the wars that do happen are more damaging.

The deadliness trend did an about-face after the Second World War. For the first time in modern human history, great power conflicts were fewer in number, shorter in duration, and less deadly. Steven Pinker expects the trend to continue.

Not everyone agrees with this optimistic picture. Nassim Taleb believes that a great power conflict on the scale of 10 million casualties only happens about once every century. The Long Peace period covers only 70 years, so what appears to be a decline in violent conflict could merely be a gap between major wars. In his paper on the statistical properties and tail risk of violent conflict, Taleb concludes that no statistical trend can be asserted. The idea is that extrapolating from historical data assumes there is no qualitative change in the system producing that data, whereas many people believe that nuclear weapons constitute a major change to the data-generating process.
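Taleb's worry about short observation windows can be made concrete with a back-of-the-envelope calculation (my own illustration, not from the talk; the once-per-century rate is an assumption taken from his figure above):

```python
# Illustrative check: suppose wars with 10+ million casualties occur at a
# long-run rate of about once per century, independently across years.
# How likely is a 70-year stretch with no such war, even if the underlying
# risk never changed?
annual_rate = 1 / 100      # assumed long-run frequency of such a war
years_of_peace = 70        # length of the Long Peace so far

# Probability of zero such wars across 70 independent years
p_no_war = (1 - annual_rate) ** years_of_peace
print(f"P(no such war in {years_of_peace} years) = {p_no_war:.2f}")  # ≈ 0.49
```

On these assumptions, a 70-year peace is roughly a coin flip even with no change in the underlying risk, which is why Taleb argues the data alone cannot establish a trend.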

Some other experts share a more sober picture than Pinker. In 2015, a poll was conducted among 50 international relations experts from around the world. 60% of them believed that the risk had increased in the last decade, and 52% believed that the risk of nuclear great power conflict would increase in the next 10 years. Overall, the experts gave a median 5% chance of a nuclear great power conflict killing at least 80 million people in the next 20 years. And then there are some international relations theories which suggest a lower bound on the risk.
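The experts' headline number can be put in annual terms with a simple conversion (my own illustration, not from the talk, assuming a constant per-year probability):

```python
# Convert the experts' median estimate — a 5% chance of a nuclear great power
# conflict killing 80+ million people within 20 years — into the implied
# constant annual probability p, by solving (1 - p) ** 20 = 1 - 0.05.
p_20_year = 0.05
p_annual = 1 - (1 - p_20_year) ** (1 / 20)
print(f"Implied annual probability = {p_annual:.4%}")  # ≈ 0.2561%
```

Even a quarter of a percent per year is far from negligible once compounded over a century.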

The Tragedy of Great Power Politics proposes the theory of offensive realism. This theory says that great powers always seek to achieve regional hegemony, maximize wealth, and achieve nuclear superiority. Through this process, great power conflicts will never come to an end. Another book, The Clash of Civilizations, suggests that the conflicts between ideologies during the Cold War era are now being replaced by conflicts between ancient civilizations.

In the 21st century, the rise of non-Western societies presents plausible scenarios of conflict. And then there's some emerging discourse on the Thucydides' Trap, which points to the structural pattern of stress when a rising power challenges a ruling one. In analyzing the Peloponnesian War that devastated Ancient Greece, the historian Thucydides explained that it was the rise of Athens, and the fear that this instilled in Sparta, that made war inevitable.

In Graham Allison's recent book, Destined for War, he points out that this lens is crucial for understanding China-US relations in the 21st century.

So, these perspectives suggest that we should be reasonably alert to the potential risks of great power conflict. But how bad would these conflicts be?

For the purpose of my talk, I first define the contemporary great powers: the US, the UK, France, Russia, and China. These are the five countries that have permanent seats and veto power on the UN Security Council. They are also the only five countries formally recognized as nuclear weapon states. Collectively, they account for more than half of global military spending.

We should expect conflict between great powers to be quite tragic. During the Second World War, 50 to 80 million people died. By some models, these wars cost on the order of national GDPs, and are likely to be several times more expensive. They also present a direct extinction risk.

At a Global Catastrophic Risk Conference hosted by the University of Oxford, academics predicted a 1% chance of human extinction from nuclear war in the 21st century. The climatic effects of nuclear wars are not very well understood, so nuclear winter presents a plausible extinction scenario, although it's also important to take model uncertainty into account in any risk analysis.

One way to think about great power conflict is as a risk factor, in the same way that tobacco use is a risk factor for the global burden of disease. Tobacco use can lead to a wide range of causes of death, including lung cancer. Similarly, great power conflicts can lead to a wide range of different extinction scenarios. One example is nuclear winter, followed by mass starvation.

Other scenarios are less obvious, and could arise due to failures of global coordination. Let's consider the development of advanced AI as an example. Wars typically cause faster technological development, often enhanced by public investment. Countries become more willing to take risks in order to develop technology first. One example was the development of a nuclear weapons program by India after going to war with China in 1962.

Repeating the same competitive dynamic in the area of advanced AI is likely to be catastrophic. Actors may trade off safety research and implementation in the process, and that might present an extinction risk, as discussed in the book Superintelligence.

Now, how neglected is the problem? I developed a framework to help evaluate this question.

First, I make a distinction between broad and specific interventions. By broad interventions I roughly mean promoting international cooperation and peace, for example by improving diplomacy and conflict resolution. Within specific interventions, there are two categories: conventional risks and emerging risks. I define conventional risks as those studied by international relations experts and national security professionals: chemical, biological, radiological, and nuclear risks, collectively known as CBRN in the community.

And then there are some novel concerns arising from emerging technologies, such as the development and deployment of geoengineering. Now, let's go back to the framework that I used to compare existential risk to the global burden of disease. A lower tobacco tax can lead to an increased rate of smoking. Similarly, the development of emerging technologies such as geoengineering can lead to greater conflict between great powers, or lead to wars in the first place. In the upcoming decades, I think it's plausible to see the following scenarios.

Private industry players are already setting their sights on space mining; major space-faring countries may compete in the future for the available resources on the Moon and asteroids. Military applications of molecular nanotechnology could be even more destabilizing than nuclear weapons. Such technology would allow for targeted destruction during an attack, and also for greater uncertainty about the capabilities of an adversary.

With geoengineering, every technologically advanced nation could change the temperature of the planet. Any unilateral action taken by one country could lead to disagreement and conflict with others. Gene-editing will allow for large-scale eugenics programs, which could lead to a bio-ethical panic in the rest of the world. Other countries might be worried about their national security interests, because of the uneven distribution of human capital and power. Now, it seems that these emerging sources of risk are likely to be quite neglected, but what about broad interventions and conventional risks?

It seems that political attention and resources have been devoted to the problem. There are anti-war and peace movements around the world advocating for diplomacy and the support of anti-war political candidates. There are also some academic disciplines, such as international relations and security studies, that are helpful for making progress on the issue. Governments also have an interest in maintaining peace.

The US government has tens of billions of dollars in the budget for nuclear security issues, and presumably a fraction of it is dedicated to the safety, control, and detection of nuclear risks. There are also some inter-governmental organizations that set aside funding for improving nuclear security. One example is the International Atomic Energy Agency.

But it seems plausible to me that there are still some neglected niches. According to a report on nuclear weapons policy by the Open Philanthropy Project, some of the biggest gaps in the space lie outside of the US and US-based advocacy. A report that comprehensively studies US-China relations and their charter diplomacy programs concludes that some relevant think tanks are actually constrained by the lack of a committed source of funding from foundations interested in the area. Since most research on nuclear weapons policy is done on behalf of governments, and thus could be tied to national interests, it seems more useful to focus on the public interest from a philanthropic and nonprofit perspective. One example is the Stockholm International Peace Research Institute. From that perspective, it seems that the space could be more neglected than it first appears.

Now, let's turn to an assessment of solvability. This is the variable that I'm most uncertain about, so what I'm going to say is pretty speculative. From reviewing the literature, it seems that there are some levers that could be used to promote peace and reduce the risk of great power conflicts.

Let's begin with broad interventions. First, you can promote international dialogue and conflict resolution. One case study: during the Cold War, five great powers, including Japan, France, Germany, the UK, and the US, decided that a state of peace was desirable. After the Cuban Missile Crisis, they largely resolved their disputes in the United Nations and other international forums for discussion. However, one could argue that promoting dialogue is unlikely to be useful if there is no pre-alignment of interests.

Another lever is promoting international trade. The book Economic Interdependence and War proposes the theory of trade expectations to predict whether increased trade could reduce the risk of war. If state leaders have positive expectations about the future, then they will believe in the benefits of peace and see the high cost of war. However, if they fear economic decline and the potential loss of foreign trade and investment, then they might believe that war now is actually better than submission later. So it is probably mistaken to believe that promoting trade is robustly useful in general; it may only be useful under specific circumstances.

Within specific, conventional risks, it seems that work on international arms control may improve stability. Recently, the nonprofit International Campaign to Abolish Nuclear Weapons brought about a treaty on the prohibition of nuclear weapons, and was awarded the Nobel Peace Prize in 2017.

Recently, there has also been a campaign to take nuclear weapons off hair-trigger alert. However, the campaign and the treaty have not been in place for long, so the impacts of these initiatives are yet to be seen. With the emerging sources of risk, it seems that the space is heavily bottlenecked by under-defined and entangled research questions. It's possible to make progress on this issue just by finding out what the most important questions in the space are, and what the structure of the space is like.

Now, what are the implications for the effective altruism community?

Many people in the community believe that improving the long-term future of civilization is one of the best ways to make a huge, positive impact.

Both the Open Philanthropy Project and 80,000 Hours have expressed the view that reducing great power conflicts, and improving international peace, could be promising areas to look into.

Throughout the talk I expressed my view through the following arguments:

  1. It seems that the idea of the Long Peace is overly optimistic, as suggested by diverse perspectives from statistical analysis, expert forecasting, and international relations theory.

  2. Great power conflicts can be understood as a risk factor that could lead to human extinction either directly, such as through nuclear winter, or indirectly, through a wide range of scenarios.

  3. It seems that there are some neglected niches that arise from the development of novel emerging technologies. I gave examples of molecular nanotechnology, gene-editing, and space mining.

  4. I've expressed significant uncertainty about the solvability of the issue; however, my best guess is that doing some disentanglement research is likely to be somewhat useful.

Additionally, it seems that the EA community has comparative advantages for working on this problem. A lot of people in the community share strong cosmopolitan values, which could be useful for fostering international collaboration rather than attachment to national interests and national identities. The community can also bring a culture of explicit prioritization and long-termist perspectives to the field. And some people in the community are familiar with concepts such as the Unilateralist's Curse, information hazards, and differential technological progress, which could be useful for analyzing emerging technologies and their associated risks.

All things considered, it seems to me that risks from great power conflicts really could be the "Cause X" that William MacAskill talks about. In this case, it wouldn't be a moral problem that we have not yet discovered. Instead, it would be something that we're aware of today but have deprioritized for bad reasons. My main recommendation is that a whole lot more research should be done, so here is a small list of potential research questions.

I hope this talk can serve as a starting point for more conversations and research on the topic. Thank you.


Nathan: Well, that's scary! How much do you pay attention to current news, like 2018, versus the much more zoomed-out picture of the century timeline that you showed?

Brian: I don't think I pay that much attention to current news, but I also don't look at this problem just from a century-timeline perspective. From the presentation, I'd guess these risks are possible in the next two to three decades. I think more research should be done on emerging technologies; space mining and geoengineering seem possible in the next 10 to 15 years. But I'm not sure whether paying attention to everyday political trends is the most effective use of effective altruists' time when it comes to analyzing long-term trends.

Nathan: Yeah. It seems also that a lot of the scenarios that you're talking about remain risks even if the relationships between great powers are superficially quite good, because the majority of the risk is not even in direct hot conflict, but in other things going wrong via rivalry and escalation. Is that how you see it as well?

Brian: Yeah, I think so. The reason I said that there seem to be some neglected niches in this issue is that most international relations experts and scholars are not paying attention to these emerging technologies. And these technologies could really change the structure and the incentives of countries. So even if China-US relations appear to be... well, that's a pretty bad example, because right now they're not going that well. But suppose in a few years some international relations appear to be pretty positive; the development of powerful technologies could still change the dynamics from that state.

Nathan: Have there been a lot of near misses? We know about a few of the nuclear near misses. Have there been other kinds of near misses where great powers nearly entered into conflict, but didn't?

Brian: Yeah. I think one paper shows that there were almost 40 near misses; I believe it was put out by the Future of Life Institute, so people can look up that paper. In general, it seems that experts agree that some of the biggest nuclear risks come from accidental use, rather than deliberate and malicious use between countries. That might be something people should look into: improving detection systems, improving the technical robustness of reporting, and so forth.

Nathan: It seems like one fairly obvious career path that might come out of this analysis would be to go into the civil service and try to be a good steward of the government apparatus. What do you think of that, and are there other career paths that you have identified that you think people should be considering as they worry about the same things you're worrying about?

Brian: Yeah. Apart from the civil service, working at think tanks also seems plausible. And if you are particularly interested in the development of emerging technologies like the examples I have given, there are some relevant EA organizations that would be interested; FHI would be one example. I think doing some independent research could also be somewhat useful, especially while we are still at the stage of disentangling the space. It would be good to find out what some of the most promising topics to focus on are.

Question: What effect do you think climate change has on the risk of great power conflicts?

Brian: I think one scenario that I'm worried about is geoengineering. Geoengineering is like a plan B for dealing with climate change, and I think there is a decent chance that the world won't be able to deal with climate change in time otherwise. In that case, we would need to figure out a mechanism by which countries can cooperate and govern the deployment of geoengineering. For example, China and India are geographically very close, and if one of them decided to deploy geoengineering technologies, that would also affect the climatic interests of the other. So disagreement and conflict between these two countries could be quite catastrophic.

Nathan: What do you think the future role of international organizations like the UN will be? Are they too slow to be effective, or do you think they have an important role to play?

Brian: I am a little bit skeptical about the roles of these international organizations, for two reasons. One is that these emerging technologies are being developed very quickly; if you look at AI, I think nonprofits, civil society initiatives, and firms will be able to respond to these changes much more quickly than going through all the bureaucracy of the UN. Also, historically, nuclear weapons and bio-weapons were mostly driven by the development programs of countries, but with AI, and possibly with space mining and gene-editing, private firms are going to play a significant role. I would be keen to explore other models, such as multi-stakeholder models, firm-to-firm or lab-to-lab collaboration, and possibly the role of epistemic communities of researchers in different countries: just getting them in the same room to agree on a set of principles. The Asilomar Principles helped regulate biotechnology decades ago, and now we have a converging discourse and consensus around the Asilomar Conference on AI, so I think people should export these kinds of models in the future as well.

Nathan: A seemingly important factor in the European peace since World War II has been a sense of European identity, and a shared commitment to that. Do you think that it is possible or desirable to create a global sense of identity that everyone can belong to?

Brian: Yeah, this is quite complicated. I think there are two pieces to it. First, the creation of a global governance model may exacerbate the risk of global permanent totalitarianism, so that's a downside people should be aware of. But at the same time, there are benefits of global governance in terms of better cooperation and security, which seem really necessary for regulating the development of synthetic biology. So a more widespread use of surveillance might be necessary in the future, and people should not disregard this possibility. I'm pretty uncertain about the trade-off there, but people should be aware of it and keep doing research on this.

Nathan: What is your vision for success? That is to say, what's the most likely scenario in which global great power conflict is avoided? Is the hope just to manage the current status quo effectively, or do we really need a sort of new paradigm or a new world order to take shape?

Brian: I guess I am hopeful for cooperation based on a consensus on the future as a world of abundance. A lot of the framework in my presentation was about regulating and minimizing downside risk, but I think it's also possible to foster international cooperation around a positive future. Just look at how much good we can create with safe and beneficial AI; we could potentially have universal basic income. If we cooperate on space mining, then we can go to space and access amazing resources in the cosmos. I think that if people come to see the huge benefits of cooperation, and the irrationality of conflict, then it's possible to see a pretty bright future.