A New X-Risk Factor: Brain-Computer Interfaces

This paper identifies a new existential risk factor which has not been recognised in prior literature: brain-computer interfaces (BCIs). Brain-computer interfaces are technologies which allow the brain to interface directly with an external device. In particular, BCIs have been developed both to read mental and emotional content from neural activity, and to intentionally stimulate or inhibit certain kinds of brain activity. At present BCIs are used primarily for therapeutic purposes, but their potential range of use is much wider.

We recognise that this sounds somewhat like science fiction, and skepticism here is warranted. However, the current state of the technology is far more advanced than most people are aware. In particular, well-corroborated research prototypes already exist (Moses et al 2019; Guenther et al 2009); a number of companies, including Facebook and Neuralink, are working to commercialise this technology over the coming decades (Constine 2017; Musk 2019); and there is widespread agreement among BCI researchers that this technology is not just feasible, but will be on the market in the near future (Nijboer et al 2011). The risks this technology poses, however, have been almost entirely neglected.

This paper will outline how the development and widespread deployment of BCIs could significantly raise the likelihood of long-term global totalitarianism. We suggest two main methods of impact. Firstly, BCIs will allow for an unparalleled expansion of surveillance, as they will enable states (or other actors) to surveil even the mental contents of their subjects. Secondly, BCIs will make it easier than ever for totalitarian dictatorships to police dissent, by using brain stimulation to punish dissenting thoughts or even make certain kinds of dissenting thought a physical impossibility.

At present, this risk factor has gone almost entirely unnoticed by the X-risk community. We suggest that given the high likelihood of its impact, and the possible magnitude of that impact, it deserves more attention, more research, and more discussion.

1. Definitions

1.1 Existential Risk

Global existential risks are those which threaten the premature extinction of Earth-originating life (Bostrom 2002), or which threaten “the permanent and drastic reduction of its potential for desirable future development” (Cotton-Barratt & Ord 2015). As such, not all existential risks pose the danger of extinction. Irreversible global totalitarianism is often considered an existential risk too, because even without posing any extinction risk, it has the capacity to irreversibly destroy or permanently curtail a great deal of humanity’s potential (Ord 2020).

1.2 Risk Factors & Security Factors

A risk factor is a situation which makes the occurrence of an existential risk more likely. An example is international conflict, which by itself offers little likelihood of extinction, but which may drastically raise the odds of nuclear war, and thus the odds of existential catastrophe. Existential risks and existential risk factors are not always mutually exclusive; some consider climate change to be both an existential risk in itself and a risk factor which might increase the danger from other kinds of existential risks (Torres 2016).

A security factor is the opposite of a risk factor: it is something which lowers the likelihood of an existential risk. For example, good international governance may be a security factor which lessens the chance of nuclear war.

Just as action to avoid existential risks is crucial, dealing with risk factors can be just as important, or in some cases even more important, than dealing with the risks themselves (Ord 2020). For example, if the chance of a particular X-risk occurring is 10%, but a risk factor brings this chance up to 90%, it may be more cost-effective to address the risk factor before addressing the risk itself. This is not always the case, and working on risk factors might be less cost-effective in some situations, but there can be strong justification for working to alleviate existential risk factors where it is cost-effective to do so.

This paper seeks to identify and outline the danger and likelihood of a new and unnoticed existential risk factor.

2. Brain-Computer Interfaces

2.1 Outline of Current Brain-Computer Interfaces

A brain-computer interface (or BCI) is an interface between a brain and an external device. Certain forms of BCI already exist; the term refers to a range of technologies used for a number of purposes. At present, the best-known commercial uses of BCIs involve recovering lost senses, as with cochlear implants used to restore hearing and retinal implants used to restore sight (Anupama et al 2012). However, BCIs have a vastly broader set of uses which already exist as either in-use medical technologies or well-corroborated research prototypes. In this section, we will outline a few of these uses to give an idea of the current and near-term scope of the technology.

For the purposes of our explanation, there are two broad functions of BCIs. The first kind of BCI is able to read neural activity: to record it, interpret it, send it, and use it for a variety of purposes. The second kind of BCI is able to write to the brain: to influence and modify brain activity, stimulating or suppressing various responses using skull-mounted microelectrodes, or using less invasive transcranial electrical stimulation. These two types can be combined and used together, though for clarity we will refer to them as type 1 and type 2 BCIs, so as to differentiate function.

Type 1 BCIs are able to read neural data, and to report and transmit this data for a number of purposes. These have already been used to translate speech from neural patterns in real time (Allison et al 2007; Guenther et al 2009; Moses et al 2019), and to detect positive and negative emotional states from neural patterns (Wu et al 2017). It is expected that near-term BCIs of this kind will be able to detect intentional deception, detect even subconscious recognition, and detect more precise and complex thought content (Evers and Sigman 2013; Bunce et al 2005; Bellman et al 2018; Roelfsema, Denys & Klink 2018). There are many practical uses of recording and interpreting neural data. So far, BCIs have been used in primates to allow them to control prosthetic limbs and smart devices with thought, by sending mental commands directly to the relevant device (Moore 2003; Carmena et al 2003; Ifft 2013). The same techniques have been used to assist people who are paraplegic or quadriplegic, by providing them with a neural shunt which records messages from the brain and sends them directly down to where the muscles are activated, allowing patients to use previously disabled limbs (Moore 2003). Many companies also have the long-term goal of allowing users to mentally transmit messages to other BCI users, allowing silent communication with only a thought (Kotchetkov et al 2010).

The uses of type 2 BCIs are even more varied. Many are therapeutic. Deep brain stimulation, for example, has used neural stimulation to treat various disabilities and conditions, including Parkinson’s disease (Deuschl 2005; Perlmutter 2006). Similar techniques have been used to alleviate disorders such as OCD (Abelson et al 2005; Greenberg 2006), have been suggested as potential future treatments for conditions like Alzheimer’s and depression (Laxton 2013; Mayberg et al 2005), and may even restore function in those with motor disability after a stroke (Gulati et al 2015).

Through deep brain stimulation, control of physical pain responses is also a possibility; such techniques have been used to alleviate chronic pain (Kumar et al 1997; Bittar et al 2005a), treat phantom limb syndrome (Bittar et al 2005b), augment memory (Suthana 2012; Hamani et al 2008), and more. Just as BCIs can currently suppress pain, pain responses can also be stimulated, for a variety of purposes ranging from interrogation to incentivisation to punishment. Similarly, BCIs are already able to artificially stimulate or suppress emotional reactions (Delgado 1969; Roelfsema & Klink 2018). These are just a few of the corroborated functions of BCIs. It has been suggested that in future, BCIs could be used as a possible treatment for cravings and addictions, and as a way to alter internal drives and reward systems (Mazzoleni & Previdi 2015; Halpern 2008).

“Consider eating a chocolate cake. While eating, we feed data to our cognitive apparatus. These data provide the enjoyment of the cake. The enjoyment isn’t in the cake per se, but in our neural experience of it. Decoupling our sensory desire from the underlying survival purpose [nutrition] will soon be within our reach.” (Moran Cerf, Professor at Northwestern University and employee at Neuralink)

2.2 Future Brain-Computer Interfaces

There is significant research and development being done to expand the capabilities of BCIs, and to make them orders of magnitude cheaper, more precise, less invasive, and more accessible to the broader population. Companies currently working on developing cheap, publicly accessible advanced BCIs include Facebook (Constine 2017), Kernel (Kernel 2020; Statt 2017), Paradromics and Cortera (Regalado 2017), and Neuralink (Musk 2019). DARPA, the research arm of the US military, is funding significant research in this direction (DARPA 2019), as is the Chinese government (Tucker 2018).

The potential uses of BCIs are well corroborated. The primary difficulties at present are cost, precision, and invasiveness. With so many companies and governments working on these problems, it is likely that the barriers will quickly fall.

2.3 Not All BCIs Involve ‘Humanity-Scale’ Risk

As a point of clarification, this paper does not argue that all BCIs act as an existential risk factor. It seems incredibly unlikely that cochlear implants have any impact on the likelihood of any existential risk. However, we do argue that certain kinds of more advanced BCI may be extremely dangerous, and may drastically raise the risk of long-lasting global totalitarianism.

2.4 Current Literature on Risks from BCIs

2.4.1 Previously Identified Risks

The current literature on global existential risk from BCIs is scarce. The vast majority of the literature on risk from BCIs has focused on impacts at a very low scale. Low-scale risks that have been considered include surgical risks from operations, possible health-related side effects such as altered sleep quality, risks of accidental personality changes, and the possibility of downstream mental health impacts or other unknown effects from BCI use (Burwell, Sample & Racine 2017). Potential threats to individual privacy have also been identified, specifically the risk of BCIs extracting information directly from the brains of users (Klein et al 2015).

At a higher scale, Bryan Caplan (2008) identified ‘brain scanning technology’ as a factor that may impact existential risk at some point in the next thousand years by assisting with the maintenance of dictatorships. However, Caplan focuses only on risk within the next millennium, and does not consider the high potential for this to occur in a far shorter time frame; in particular, within the next hundred years. He also mentions brain scanning only briefly, and does not consider the risk from brain scanning technology being present and active in all citizens at all times, even though such widespread use is a stated goal of multiple current BCI companies. Finally, Caplan did not consider the full depth of the impact of BCIs: he mentions only the capacity of brain scanning to increase the depth of surveillance, while ignoring the existential risk posed by the widespread use of brain stimulation.

2.4.2 Cybersecurity and Coercion

A final risk identified in prior literature is cybersecurity, though here too the focus has primarily been on the threat to individuals, specifically vulnerabilities in information security, financial security, physical safety, and physical control (Bernal et al 2019a). BCIs, just like computers, are vulnerable to manipulation by malicious agents. BCIs and brain scanning offer an unprecedented level of personal information (passwords, as well as data about a user’s thoughts, experiences, memories, and attitudes), and thus offer attractive terrain for attackers. It is likely that security flaws will be exploited by malicious actors to assist with cybercrime. Previously identified risks here include identity theft, password hacking, blackmail, and even compromising the physical integrity of targets who rely on BCIs as a medical device (Bernal et al 2019b). The use of deep brain stimulation for coercion or control of BCI users is also a possible source of risk (Demetriades et al 2010). Corroborated possibilities here include control of movement, evoking emotions, evoking pain or distress, evoking desires, and impacting memories and thinking processes; and these are just the earliest discovered capabilities (Delgado 1969). However, past papers have focused exclusively on this as a risk to individuals: that individuals may be sabotaged, surveilled, robbed, harmed, or controlled. Past research has not yet explored the risk posed to humanity as a whole.

This paper seeks to take the first steps to fill that gap, outlining the risks BCIs pose at a broader, global scale, and addressing the threat they present to the future of all of humanity.

2.5 Higher-Scale Risks: BCI as a Risk Factor for Totalitarianism

2.5.1 Risk from Neural Scanning: Ability to Surveil Subjects

Dissent from within is one of the major weaknesses of totalitarian dictatorships, and BCIs offer a powerful tool to mitigate this weakness. Increased capacity for surveillance would make it easier to identify and root out dissenters, or skeptics who might betray the party, and thus easier to maintain totalitarian control. While conventional surveillance allows a high level of monitoring and tracking of citizens’ behaviour and actions, it provides no way for a dictator to peer inside the minds of their subjects. Because of this, identifying the attitudes of careful defectors remains difficult. BCIs pose an unprecedented threat here: surveillance through existing methods may fail to expose some threats to a totalitarian regime, such as party members who carefully hide their skepticism, but BCI-based surveillance would have no such flaw.

The level of intrusion here is potentially quite severe. With the advancement of BCIs, it is highly likely that in the near future we will see a rapid expansion in the ability to observe the contents of another’s mind. Some researchers claim that advanced BCIs will have access to more information about the intentions, attitudes, and desires of a subject than the subjects themselves do, suggesting that even subconscious attitudes and subconscious recognition, as well as intentional deception and hidden intentions, will be detectable by BCIs (Evers and Sigman 2013; Bunce et al 2005). Already, BCIs are able to detect unconscious recognition of objects that a subject has seen, but cannot consciously remember seeing (Bellman et al 2018).

Others have suggested that by more precisely recording the activity of a larger number of neurons, future BCIs will be able to reveal not just perceptions and words, but emotions, thoughts, attitudes, intentions, and abstract ideas like recognition of people or concepts (Roelfsema, Denys & Klink 2018). Attitudes towards ideas, people, or organisations could be discovered by correlating emotions with their associated thought content, and dictatorships could use this to discover attitudes towards the state, political figures, or even ideas. This would allow detection of dissent without fail, and allow a dictator to quell rebellion before a rebellious thought is even shared.

Some might hope for BCIs which do not have this level of access, but accessing and recording mental states is a fundamental and unavoidable feature of many BCIs. In order to achieve their desired functions, many BCIs need significant neural data; without it they simply cannot function, since it is impossible to translate neural data into some function without access to that data. Brain stimulators and BCIs are specifically designed to allow this kind of access; it is crucial to the effective functioning of the device (Ienca 2015). It is of course possible that BCIs made by some companies will be exclusively targeted at certain sections of the brain, for example only targeting areas associated with speech, and not areas associated with emotions or thought. This is conceivable, though it is not clear that all companies and countries would do the same. Furthermore, the utility gained by expanding beyond the speech centre to other areas of the brain makes it highly doubtful the technology will remain restrained indefinitely.

Furthermore, it is likely that BCIs will be created by companies, which have strong financial incentives to record the neural states of users, if only to gain more information with which to improve their own technology. This information could be requisitioned by governments, as is frequently done to tech companies at present, even in democratic countries. Compounding this problem, privacy law has a history of struggling to keep pace with technological advancement. In more authoritarian countries, neural data might be transmitted directly to state records, and the preservation of privacy may not be attempted at all.

In essence, BCIs allow an easy and accurate way to detect thoughtcrime. For the first time, it will be possible for states to surveil the minds of their citizens. Deep surveillance of this kind would increase the likelihood that totalitarian dictatorships last indefinitely.

2.5.2 Risks from Brain Stimulation: Ability to Control Subjects

Beyond the recording of neural activity lies an even greater threat, one which has not been considered as an existential risk factor in any prior literature. In addition to reading brain activity, BCIs are able to intentionally influence it. In particular, future BCIs will be able to rewire pleasure and pain responses, and allow emotional responses to be intentionally stimulated or inhibited, en masse. Where this is done consensually, it may be of some benefit. However, nothing about this technology guarantees consent.

In addition to identifying dissident elements more effectively than ever (due to increased surveillance), BCIs will also powerfully increase the ability of states to control their subjects, and to maintain that control indefinitely. In such a situation, identification of dissidents would no longer even be necessary, as a state could guarantee that dissident thought is a physical impossibility. Finely honed BCIs can already trigger certain emotions, and associate certain emotions or stimuli with certain concepts (Roelfsema, Denys & Klink 2018). This could be used to mandate desirable emotions towards some ideas, or to make undesirable emotions literally impossible. Though this possibility has been discussed in the literature for its therapeutic uses, such as triggering stimulation in response to negative obsessive thoughts (nullifying the negative emotions those thoughts cause), there is huge potential for misuse. A malicious controller could stimulate loyalty or affection in response to some ideas, or to specific organisations and people, and stimulate hatred in response to others. It could also inhibit certain emotions, so that citizens would not be physically able to feel anger at the state. The ability to trigger and suppress emotional content with BCIs has existed for years (Delgado 1969). Combined with complex and detailed reading of thought content, this is a highly dangerous tool.

Some might argue that dissident action may be possible even with an outside state controlling one’s emotional affect. This is highly debatable, but even without any control of emotional content, the risk from BCIs is still extreme. BCIs could condition subjects to reinforce certain behaviour (Tsai et al 2009), stimulate aversion to inhibit undesired behaviour (Lammel et al 2012), or stimulate the pain or fear response (Delgado 1969), causing intense and unending pain in response to certain thoughts or actions, or even in response to a lack of cooperation. Even without controlling emotional affect, the state could punish dissident thoughts in real time, and make considering resistance a practical impossibility. This is a powerful advantage for totalitarian states, and a strong reason for authoritarian states to become more totalitarian. In addition to surveillance, it allows a way to police the population and gain full cooperation from citizens which (once established in all citizens) could not be resisted. Machine learning programs scanning state databases of neural activity could detect thought patterns deemed negative towards the state, and punish them in real time. Or, if the state is more efficient, it could simply stimulate the brains of subjects to enforce habits, increase loyalty, decrease a subject’s anger, or increase their passivity (Lammel 2012; Tsai et al 2009). Even high-level dissent or threat of a coup would be virtually impossible in a totalitarian state of this kind, and its long-term internal security would be assured.

This is a technology which fundamentally empowers totalitarianism. It allows a way to police the population and gain full cooperation from citizens which could not be resisted, because even considering the idea of resistance, or feeling disdain towards the state, could be detected and rewired (or punished) in real time. At worst, the brain could be re-incentivised, with errant emotions turned off at the source, so that dissenting attitudes are never able to form.

BCIs also offer an easy way to interrogate dissidents and guarantee their cooperation in finding other dissident camps, which might otherwise be impossible. In past resistance movements, certain dissidents have been considered near-impossible to completely wipe out, because features of the terrain made locating them cost-prohibitive. If the government is able to access and forcibly apply BCIs, this obstacle becomes dramatically weaker. Dissenters might normally lie or refuse to cooperate; with BCIs, they simply need to be implanted and rewired. They would then be as loyal and cooperative as any other citizen, and could actively lead the state to their previous allies. Even defectors still at large could not be fully trusted, as they might one day be controlled by the state. Another issue for the long-term survival of totalitarian dictatorships is coups or overthrows from within, as citizens or party officials are often tempted by different conditions in other states. With BCIs, the loyalty of regular citizens and even party officials could be assured. In current dictatorships, wiping out dissidents (particularly nonviolent dissidents) often carries a significant social cost which can delegitimise and destabilise regimes (Sharp 1973). A dictatorship whose citizens are all implanted with BCIs would not pay this social cost, or run such a risk of overthrow. At present, when dictators crack down it can cause riots and resistance, which can cause dictatorships to fall. With BCIs, governments will not need to appease their citizens at all to maintain loyalty. They need only turn up the dial.

It has long been argued that technologies can incline us towards certain systems of government (Orwell 1945; Huxley 1946; Martin 2001). BCIs are one of these technologies. By making the surveillance and policing of humans much easier, they incline us towards totalitarianism, and allow for a kind of totalitarianism that could be stable indefinitely. They do this by making the identification and control of dissidents (or even the first sparks of dissident thought) drastically easier, and by giving states the ability to turn dissent off at the source. Notably, Caplan (2008) proposed that for totalitarianism to be stable (in the absence of BCIs) it would need to be a global phenomenon, so that the citizens of a totalitarian government could not be tempted by other kinds of social system. With the development of BCIs, this is no longer a necessary criterion for stable totalitarianism.

2.6 Strategic Implications for Risk of Global Totalitarianism

In this section we explore some global strategic implications of BCIs: in particular, that BCIs allow totalitarian regimes to be stable over the long term, even without global totalitarianism. We also argue that BCIs make authoritarian regimes more likely to become totalitarian in the first place, and we explore the dangerous strategic equilibrium that they create. Essentially, BCIs make it easier for totalitarianism to occur, easier for it to be established globally, and easier for it to last indefinitely.

Totalitarian states may fail for a few reasons. Conquest by external enemies is a danger, and since totalitarian states tend to stagnate more than innovative liberal states, this danger may grow over time. Internal dangers exist too: citizens may choose to rebel after comparing their lives to more prosperous countries in the outside world. Violent and nonviolent resistance movements have been able to overthrow even harsh authoritarian regimes (Chenoweth 2011), and at least one totalitarian state has been overthrown by popular uprising (specifically, the Socialist Republic of Romania).

It has been suggested that the presence of successful liberal countries may tempt defection among the members of authoritarian and totalitarian countries; maintaining the morale of citizens and the inner elite is a primary issue. Orwell (1945) and Caplan (2008) both propose that global totalitarianism would allow a totalitarian state to escape these risks of rebellion, as there would be no better condition for subjects to be tempted by or to compare their lives to. However, global totalitarianism is not necessary; BCIs can disarm these issues. Not only is identification of dissent easier; the capacity for dissent can be entirely removed, such that it never even begins, and loyalty and high morale can be all but guaranteed. Typically, it is hard to maintain commitment to totalitarian ideologies when free societies deliver higher levels of wealth and happiness with lower levels of brutality and oppression. BCIs could neutralise this problem, making temptation physically impossible, loyalty guaranteed, and regimes stable over the long term.

Global totalitarianism would no longer be required for a regime to be sustainable in the long term. Individual totalitarian countries could be stable internally due to BCIs. Furthermore, they could also be secured against external threats through the development of nuclear weapons, which powerfully discourage war and provide security from foreign nations. Being safe from both internal and external threats would significantly extend the lifespan of a totalitarian country.

A second impact of BCIs is that conventional dictatorships may become far more likely to turn totalitarian, as BCIs would make it easy and advantageous to do so. In particular, an easy, cheap, and effective way to identify and remove all opportunities for dissent in a population is a powerful advantage for a dictatorial regime; survival itself would be a powerful reason to descend into totalitarianism. BCIs may therefore increase not just the longevity of totalitarian states, but also the likelihood that they occur in the first place.

Finally, this also creates a worrying strategic situation which may increase the likelihood of totalitarianism entrenching itself globally. With BCIs, totalitarian countries would almost never fall to internal threats. Meanwhile, democratic countries which do not brainwash their citizens may still at some point degenerate into a more authoritarian form of government, at least for a short period. Democracies have rarely lasted more than a few centuries, and have often temporarily slid into dictatorship or authoritarianism. BCI technology ensures that if a country falls to totalitarianism, the fall will be permanent, as BCIs will allow the regime to maintain itself indefinitely. At present, democracies can collapse into dictatorship, and dictatorships can have revolutions and rise to democracy. With BCIs, democracies can still collapse, but dictatorships can last forever. Of course, countries can also be threatened from outside, but with the advent of nuclear weapons, external military conquest is a much less viable option. In short, with a combination of BCIs and nuclear weapons, a totalitarian country could be secure from within and from external threats as well.

This is a dangerous strategic equilibrium: free countries will still eventually fall, as they do at present, but when they do, they will not be able to climb back out. Democracies could collapse into dictatorship, but dictatorships could never rise from that state. In a world where democracies are mortal but dictatorships live forever, the global system inevitably inclines towards totalitarianism.

3. Risk Estimates

3.1 Probability Estimates of Existential Risk from BCI

This section will offer a conservative Fermi estimate of the existential risk from BCIs in the next 100 years. We use the same framework used by Toby Ord to assess other existential risks (Ord 2020). The following sections (3.2 to 3.6) will unpack and justify the estimates made in this section.

Ord outlines two methods of being conservative about risk. One is to use lower-bound assumptions, so that we do not overestimate the likelihood of a risk. The other is to use upper-bound assumptions, so that we do not accidentally underestimate it and act complacently. When guiding action, the second approach is generally more useful, as it is more prudent and more likely to avoid catastrophic failure. However, to make the strongest case possible, in this paper we use the first approach, and show that even with our most conservative, lower-bound assumptions, the risk from BCIs is significant. In fact, our risk estimate would need to be almost an order of magnitude (~10x) lower merely to be on par with the probability Ord assigns to existential risk from nuclear war in the next 100 years (approximately 0.1%) (Ord 2020).

The estimates provided here are not intended to be exactly calibrated, but to estimate the risk to within an order of magnitude. These numbers should also clarify the dialogue, and allow future criticism to examine in more detail which assumptions are overrated or underrated.

Our probability estimate is broken down into five factors:

D: Likelihood of development and mass popularisation of X-risk-relevant BCI technology.
E: Likelihood that some countries will begin to use BCIs to establish totalitarian control of their population.
S: Likelihood of global spread of totalitarianism.
R: Likelihood of total reach.
L: Likelihood of indefinite regime survival.

Total existential risk = D × E × S × R × L = 70% × 30% × 5% × 90% × 90% ≈ 0.0085, or roughly 1%.
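For concreteness, the arithmetic above can be reproduced with a few lines of Python. This is only an illustrative sketch: it treats the five factors as independent, exactly as the product formula does, and the helper function total_risk is ours, not part of any existing model.

```python
# Illustrative sketch: reproduce the conservative Fermi estimate above.
# Assumes the five factors are independent, as in the formula in the text.

def total_risk(d, e, s, r, l):
    """Multiply the five factors into a total probability of existential catastrophe."""
    return d * e * s * r * l

conservative = total_risk(d=0.70, e=0.30, s=0.05, r=0.90, l=0.90)
print(f"Conservative estimate: {conservative:.4f} ({conservative:.2%})")
# -> Conservative estimate: 0.0085 (0.85%), i.e. roughly 1%
```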

This estimate is based on the following values, which are explored in detail in sections 3.2 to 3.6.

Likelihood of development (D): 70%
Likelihood that countries begin using BCIs to control the population (E): 30%
Likelihood of global spread (S): 5%
Likelihood of total reach (assuming global spread is assured) (R): 90%
Likelihood of lasting indefinitely (L): 90%

This is almost 10x the probability Ord assigns to existential risk from nuclear war by 2100, and slightly less than one tenth of the risk he assigns to AGI (Ord 2020).

Supposing these overall estimates are too high by an order of magnitude, the risk would still be on par with existential risk from nuclear war, and thus still significant enough to address. If there were reason to confidently suggest that these estimates are off by two orders of magnitude or more, there would be a reasonable case for setting aside the risk from BCIs in favour of prioritising other existential risks. However, it seems unlikely that these estimates could reasonably be off by two orders of magnitude or more.

But this estimate refers to the overall risk. What is relevant is not the overall risk, but the increase in risk due to BCIs. Bryan Caplan (2008) attaches a risk of 5% in the next 1000 years to the development of a global totalitarianism that would last at least 1000 years. This evens out to approximately 0.5% per century, assuming the risk is evenly distributed over the 1000 years.
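To make that conversion explicit, here is the worked calculation (the even-distribution assumption is the one stated above):

$$P_{\text{per century}} = \frac{5\%}{10} = 0.5\%$$

Treating the millennium figure as a constant per-century hazard instead gives almost the same number: solving $1-(1-p)^{10} = 0.05$ for $p$ yields $p = 1 - 0.95^{1/10} \approx 0.51\%$.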

Even on these most conservative, lower-bound estimates, our figures indicate that BCIs would almost double the risk of global totalitarianism within the next 100 years. If the true values are higher than this extreme lower bound, the capacity of BCIs to act as a risk factor may be significantly greater.

A less conservative estimate may also be illustrative of a reasonable level of expected risk.

D × E × S × R × L = 85% × 70% × 10% × 95% × 95%
≈ 0.054 = 5.4%
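Using the same illustrative total_risk helper sketched in section 3.1:

```python
less_conservative = total_risk(d=0.85, e=0.70, s=0.10, r=0.95, l=0.95)
print(f"Less conservative estimate: {less_conservative:.4f} ({less_conservative:.1%})")
# -> Less conservative estimate: 0.0537 (5.4%)

# Comparison against Caplan's ~0.5%-per-century baseline:
print(f"Increase over baseline: {less_conservative / 0.005:.1f}x")
# -> Increase over baseline: 10.7x  (the "almost 11x" figure below)
```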

This less conservative estimate would mean an overall existential risk of 5.4% in the next 100 years: an increase of almost 11x on the baseline totalitarian risk given by Caplan, assuming his risk is evenly distributed over 1000 years.

3.2 Likelihood of Development

Technological development is notoriously hard to predict, so our method in this section is to keep our predictions in line with previous predictions by experts.

Estimates from experts in the field are very clear: the vast majority of BCI researchers surveyed (86%) believed that some form of BCI for healthy users will be developed and on the market within 10 years, and 64.1% believed that BCI prostheses would be on the market within the same time frame (Nijboer et al 2011). Notably, only 1.4% of those surveyed claimed that BCIs for healthy users will never be marketable and in use. These predictions should be taken with some scrutiny; almost ten years have passed since the survey was run, and BCI prosthetics are not yet on the market. Still, the optimism of experts is telling about the level of progress in the field. The time frame seems more likely to be decades than centuries.

Of course, present BCIs are fairly rudimentary, and one might argue that advanced mind reading may never be possible. However, the consensus among researchers is that it is both theoretically possible and likely: a recent survey of BCI researchers found universal consensus among participants that mental states will one day be able to be read (Evers & Sigman 2013). Similarly, a survey of software engineers who had been presented with a working BCI found a universally shared belief that the contents of the mind will someday be read by machines, though the survey provided no time frame. This uniformity of belief was found despite participants’ divergent views about the nature of the mind (Merrill & Chuang 2018).

Furthermore, there is significant reason to believe that the primary issues holding back commercial BCIs will soon be solved. Present issues include scaling the technology down so that it is small enough to be usable, enabling it to target more neurons and to target them more precisely, and reducing cost so that devices can be easily reproduced. The fact that multiple well-funded companies and multiple national governments are working towards these goals makes us believe that the likelihood of development is relatively high (DARPA 2019; Kotchetkov et al 2010; Tucker 2018). To reinforce this, the global BCI market was valued at $1.36 billion in 2019, and is projected to reach $3.85 billion by 2027, growing at 14.3% per year (Gaul 2020).
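As a rough sanity check on those market figures, compounding the 2019 valuation at the stated growth rate gives

$$\$1.36\text{B} \times 1.143^{8} \approx \$1.36\text{B} \times 2.91 \approx \$3.96\text{B},$$

which is close to the projected $3.85 billion for 2027 (the projection itself implies a compound annual growth rate of $(3.85/1.36)^{1/8} - 1 \approx 13.9\%$); the small discrepancy is presumably rounding in the source figures.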

Further justification can be found in the kind of strategic environment in which this technology is being developed. Specifically, it is a textbook case of a progress trap (Wright 2004): a situation where multiple small incentives (much like bait) encourage a culture to follow a certain course of development, one which is beneficial in the short term but catastrophic in the long term. In this case, we will be incentivised to develop BCIs every step of the way, as there will be strong rewards for doing so, specifically in the form of relieving the disease burden of conditions like Alzheimer’s and Parkinson’s disease, and alleviating suffering for amputees by improving prosthetics. There are significant medical, moral, and economic advantages to be had in implementing technology which can alleviate that disease burden. This will provide continual incentives to develop the technology, but will lead us to a place where a new technology may drastically increase existential risk.

In short, existential-risk-relevant technology for mind reading and mind control may be developed not for military purposes, but for valid economic and medical reasons. It will slowly evolve over time, solving one problem at a time, like Alzheimer’s and Parkinson’s disease. Each advancement will seem logical, morally justified, and necessary. The immediate consequences of the technology will be unassailable, and will make the technology more difficult to criticise. The second-order consequences, however, present serious challenges to the future of humanity.

Given this corporate competition, the expected increase in global market valuation, the number of corroborated BCI prototypes, the presence of a progress trap, and the near consensus about the possibility of the technology in surveys of researchers, we make the fairly conservative assumption that advanced, X-risk-relevant BCIs will be easier to develop, and more likely to be developed this century, than AGI (artificial general intelligence). This seems especially likely considering that we are uncertain of even the possible methods by which AGI could be built. In terms of the probability we should attach to this claim, a 2011 survey of machine intelligence researchers suggested a 50% chance of the development of artificial general intelligence by 2100 (Sandberg & Bostrom 2011). As such, we suggest 70% as an extreme lower-bound estimate for the development of BCIs. The probability is likely much higher; however, we believe this estimate to be a reasonable lower bound.

3.3 Likelihood that Some Governments Will Begin to Use BCIs to Maintain Control

Once the technology is developed and available, how likely is it that some governments will begin using BCIs to control their country? The scenarios we are considering here include governments which begin using BCIs to control a wide swathe of their population, either by forcibly implanting citizens with BCIs or by overriding and controlling the behaviour of BCIs that were implanted consensually. To satisfy the criteria we have set out, the government must succeed in doing this on a large scale; for example, controlling a significant portion of the population (~20%). Simply having a few dozen citizens with compromised BCIs would be a negative event, but is not of sufficient scale for what we are discussing.

Probabilities here are difficult to estimate; however, there are a few conditions of this strategic environment which inform our estimates.

Firstly, there will be a strong incentive for authoritarian countries to use BCIs to control, surveil, and police their populations, and to take control of BCIs which have already been implanted. Doing so would help to stabilise their regimes, guarantee the loyalty of their subjects (disloyalty being a frequent and major threat to dictatorial regimes), identify dissidents, and crack down on or rehabilitate those dissidents. The opportunity to use BCIs to enhance interrogation may also make certain kinds of decentralised resistance more difficult, as even former collaborators may be controlled and used by the totalitarian government they previously fought against. BCIs would allow for the perfection of totalitarianism within a country. Totalitarian, and even authoritarian, countries will have a strong incentive to roll this out in their own societies, assuming BCIs are cost-effective to build and use.

This is already being seen to some extent. Much of the work on BCI-based detection of deception has been performed by state-sponsored institutions in China (Munyon 2018), which is currently rated as an authoritarian government on the Global Democracy Index (EIU 2019). Combined with China’s history of performing surgeries to treat mental illness without patient consent (Zamiska 2007; Zhang 2017), this offers historical precedent, and highlights the real possibility that BCIs could be used harmfully by an authoritarian state.

There is an added effect here. BCIs give authoritarian countries an extra incentive to descend into full totalitarianism, and thus make full totalitarianism more likely to come about. The survival of a regime can be greatly assisted by cheap, effective surveillance and cheaper, more effective coercion. Through the use of BCIs, surveillance will be easier and more extensive than ever before. More dangerous still, taking an extreme level of control over even the most private thoughts and emotions of citizens will become possible, easy, cheap, and useful.

A second factor is the number of possible sources. The more authoritarian countries there are, the more agents there are which may choose to use BCIs to control their populations, and so the higher the likelihood of catastrophic disaster. This claim rests on the assumption that authoritarian countries are more likely to use BCIs to enslave their populations than less authoritarian, more democratic countries. At present, there are 91 countries worldwide (54.5% of countries, containing 51.6% of the world’s population) under non-democratic regimes; 54 of these are authoritarian regimes (32.3% of all countries) and 37 are hybrid regimes (22.2% of all countries) (EIU 2019). It is of course possible that this may change; over the last hundred years, the number of authoritarian regimes worldwide has dropped (Pinker 2018). However, it is unclear whether such a trend should be expected to continue, halt, or reverse itself. Between 2007 and 2019 the number of authoritarian regimes globally rose once again, with the Global Democracy Index seeing stagnation and decline every year from 2014 to 2019, declining scores in more than half the world’s countries in 2017, and 2019 recording the worst average global score since the index was first produced in 2006 (EIU 2019). There is a significant possibility that global democracy will continue to retreat.

However, the number of agents is only an issue if those agents are able to gain access to BCI technology. Once complex, cost-effective BCIs have been developed, they will be a strong competitive advantage to countries that have access to them. A greatly relieved disease burden from conditions like Alzheimer’s and Parkinson’s would have positive economic effects (as well as desirable humanitarian effects in terms of relieving suffering), as might BCI augmentation of the workforce. Individual countries will want BCIs because BCIs will keep them economically competitive; this is comparable to an economic arms race. This in itself does not imply that countries will use BCIs to assist with totalitarianism, or force them on their citizens; just that many countries, even democratic ones, will have incentives to encourage the development of BCIs and their use by citizens. And the more countries that develop and use BCIs, the higher the chance that we run upon a country that will misuse them.

This may be less of a risk if there is a strong ability to prevent technological proliferation. This is conceivable. However, the level of success over the last 100 years at preventing proliferation has depended heavily on features of the individual technologies. With many weapons to which we seek to limit access, such as nuclear weapons, proliferation is stopped not by restricting knowledge (which is typically very difficult) but by restricting materials, e.g. access to enriched uranium. Based on current BCIs, there appears to be no significant materials bottleneck, as BCIs do not require any fundamentally rare or unique materials. Furthermore, it is easier to prevent proliferation of technologies used by governments and militaries than of technologies used by civilians. Because BCIs are currently being planned as a widespread civilian technology, other countries are likely to gain access to them, and to have the opportunity to reverse engineer them. With this in mind, anti-proliferation methods would need to focus on preventing the spread of knowledge about the development or security of BCIs.

Third, individual countries or regions will be strongly incentivised to manufacture BCIs domestically, or to support local companies which manufacture BCIs rather than foreign ones. The reason for this can already be seen in 2020: many countries at present, quite reasonably, do not trust foreign companies in situations that may compromise their national security or information about their citizens (Hatmaker 2020). Similarly, countries are unlikely to trust foreign companies with access to, and control over, the brains of their subjects. As a result, major countries (or regions) are more likely to develop BCIs domestically rather than use those developed by foreign companies, and it is therefore likely that a diversity of BCIs will be developed, with a variety of capacities and varying levels of government control and access.

If individual countries are incentivised to manufacture their own BCIs, then we are more likely to get a diversity of approaches to privacy. Some companies and countries may develop BCIs that give neither read nor write access to the company or the government (though this may be somewhat optimistic). Some may develop BCIs that give read access but not write access, allowing surveillance but not control. And other countries that desire it will be able to create their own BCIs where the company or government has both read and write access.

If there were only a single actor controlling the development of all BCIs, the situation would likely be easier to control and regulate. Having multiple state actors, each of whom can decide the level of coercion their BCIs are built with, is a much more complex scenario. If there is a competitive advantage to doing away with certain freedoms, then there is a reasonable possibility that countries will expand their access to, and control over, BCIs. This is especially likely considering the historical precedent that in times of war, countries are often incentivised to do things that were previously considered unthinkable. For example, prior to World War Two, strategic bombing (the intentional bombing of civilians) was considered unthinkable and against international law (League of Nations 1938; Roosevelt 1939); but these norms quickly broke down as the tactics were deemed necessary, and by the end of the war the bombing of civilians was no longer considered a war crime, but a common feature of war (Veale 1993; Ellsberg 2017). It is reasonably likely that in future situations of crisis, restrictions on read and write access may be removed, or emergency powers may be demanded by certain governments. This is a clear security risk, and it means that trusting governments to stick by their commitments may not be a viable long-term strategy. Further evidence can be seen in the steady expansion of government surveillance even by democratic governments.

With this in mind, we attach a lower-bound probability of 30% to the chance that some country sets the precedent of using BCIs to control its population within the next 100 years. Considering the incentives companies worldwide will have to proliferate the technology, the difficulty of preventing proliferation of BCIs, the strong incentives authoritarian governments will have to use them, the number of authoritarian governments worldwide, and the historical precedent of authoritarian governments using whatever means they can to stabilise their regimes, we believe that a 30% chance of catastrophic incident is an extremely conservative lower-bound estimate.

3.4 Likelihood of Global Spread

We define global spread as a situation where either a global totalitarian dictatorship is established which utilises BCIs to control the populace and has no major competitors, or a multi-polar world is established in which all major players are totalitarian dictatorships that heavily utilise BCIs to control their populaces.

There are a few mechanisms by which these conditions could occur. One is military conquest: a totalitarian dictatorship could become aggressively expansionist and seek to control all other countries. It is conceivable that BCIs would give dictatorial countries an advantage here, as they may push their soldiers further than non-dictatorships (which maintain a focus on individual rights and welfare) would be able to. However, it is also reasonably likely that human soldiers will be a less and less essential factor in future wars, so it is unclear whether BCIs will offer decisive military advantages. We also consider it implausible that the first BCI creators could retain a monopoly on the technology long enough to conquer all other major states.

Overall, we attach a very low likelihood to global domination by military conquest, primarily due to the presence of nuclear weapons. The threat of a nuclear response is a powerful reason to avoid invasion, and a reason to believe that attempts at global conquest through military means are relatively unlikely in the near future. As such, nuclear deterrence may stop a BCI-controlled country from spreading its control to other countries by force.

Second, it is possible that a country does not expand its influence globally by force, but instead gains global dominance slowly through economic influence. In this case, that country could popularise BCIs in tribute countries that are economically dependent on it, and eventually begin to intentionally misuse these implanted BCIs to maintain or expand control in those countries. We consider this scenario far more likely than military conquest, in particular because multiple scholars and politicians already predict the rise of China as a single dominating global superpower (displacing the US) as a realistic possibility by the middle of the century (Jacques 2009; Pillsbury 2015; Press Trust of India 2019).

There is also a third method, which we consider the most likely. It is possible that no country expands to gain global dominance, but that countries independently fall to totalitarianism, creating a multi-polar global totalitarian order. At present this is not a major concern, as dictatorships (even totalitarian dictatorships) can be overthrown; the likelihood of all countries falling to totalitarianism individually, without any rising back to democracy, is low. We suggest, however, that BCIs make the path to such a multi-polar totalitarianism much more likely.

Firstly, as previously discussed, BCIs increase the likelihood of totalitarianism within any individual country: once a country becomes authoritarian, there is a powerful reason to transition to BCI-reinforced totalitarianism, as this would strongly increase the odds of survival of both the dictator and the government. Secondly, as discussed previously, BCIs allow for a stable, sustainable form of totalitarianism which would be very hard to reverse, as rebellion from within would be literally ‘unthinkable’.

Furthermore, BCIs set up a dangerous strategic equilibrium. As previously established, BCIs may make it highly unlikely that dictatorships can be overthrown from within. Nuclear weapons make it highly unlikely that major powers will be overthrown from outside. With this in mind, once a country falls to totalitarianism and uses these two technologies to maintain order, that dictatorship is likely to last indefinitely, barring major technological shifts that invalidate the strategic advantages of these technologies. Democracies may choose to preserve the freedom of their constituents (though they also may not). Over time, however, individual powers may fall to authoritarianism, and use BCIs to establish irreversible, immortal totalitarian dictatorships in their own regions. In a world where a) countries that preserve mental freedom might degenerate into totalitarian countries, b) totalitarian dictatorships are immortal, and c) there is no available territory for new free countries to be created, countries will steadily converge upon becoming dictatorships: a free country might not be free forever, but once a BCI-reinforced dictatorship is established, it is likely to last indefinitely. This provides a clear path into a multi-polar global totalitarian order.
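
The one-way nature of this equilibrium can be made concrete with a toy model. The sketch below is a minimal illustration of the convergence argument, not an estimate from this paper: it treats BCI-reinforced dictatorship as an absorbing state, and the per-decade hazard rate is a hypothetical number chosen purely for illustration.

```python
# Toy absorbing-state model of the equilibrium described above.
# Assumption (hypothetical, for illustration only): each decade a free
# country falls to BCI-reinforced totalitarianism with probability p,
# and once fallen it never recovers (the absorbing state).

p_fall_per_decade = 0.02  # hypothetical hazard rate, not estimated in this paper

for decades in (10, 20, 50):
    share_free = (1 - p_fall_per_decade) ** decades
    print(f"After {decades * 10} years: {share_free:.0%} of free countries remain free")

# Output: 82% after 100 years, 67% after 200 years, 36% after 500 years.
# Because recovery is impossible in this model, the share of free countries
# can only decline, so the system drifts toward universal dictatorship no
# matter how small p is.
```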

Considering the rise of authoritarian and semi-authoritarian leaders in multiple countries worldwide in just the last 5 years, and the broader trend against democracy, with 113 countries seeing a net decline on the global democracy index in the last 14 years (EIU, 2019), this seems a realistically possible, if somewhat unpredictable, trajectory.

However, even assuming such a negative trend were to continue, 100 years is a very short time period for all major countries to fall to authoritarianism and totalitarianism. Stronger trends would be needed to meet that time frame. It is of course possible that this process is exacerbated, and the fall to authoritarianism made much more likely, by the occurrence of disasters such as climate change or a limited nuclear exchange, which might make responding governments vulnerable to autocracy (Beeson 2010; Fritsche et al 2012; Martin 1990). However, we consider the probability of global totalitarianism being established in the next 100 years to be fairly low. This being said, our low probability estimate should not downplay the risk: if, by 2100, the 54.6% of countries that have currently fallen to authoritarian and hybrid regimes are reinforced by BCI-induced loyalty, we may be well on the way to global totalitarianism. If BCI-reinforced totalitarianism is already entrenched in a great number of countries, then the problem will be drastically harder to stop, and the overall risk will be higher. This creates an unusual strategic circumstance with regard to existential risks: with more time, more countries will fall, and the more totalitarian countries there are, the harder this problem will be to solve. The likelihood of global totalitarianism within 200 years may be far higher than the likelihood within 100 years. As such, this is a problem that may be easier to address earlier rather than later.

With this in mind, we attach a fairly low probability to the global spread of totalitarianism within this century. Due to the nature of BCIs and the strategic environment they create, we expect the likelihood of this risk to rise over time, and to become increasingly difficult to address. The longer the problem is left unaddressed, the more countries will fall to authoritarianism and use BCIs to make their rule indefinitely stable. As such, risk from BCIs may not be evenly distributed in time. Though the likelihood of global totalitarianism in the next 80 years may be relatively low, the 21st century may be seen as the turning point, and action in the 22nd century may come too late.

Considering the possible paths to global totalitarianism, we attach a lower bound estimate of a 5% likelihood to the BCI-facilitated global spread of totalitarianism within 100 years, and a significantly higher likelihood over the coming centuries.

3.5 Likelihood of Complete Reach

With some existential risks, there is a reasonably high likelihood of affecting the vast majority of humanity, but there is no likely mechanism by which they could affect all of humanity. For example, a natural pandemic might be able to affect 80% of the world's population, but be highly unlikely to affect heavily isolated groups.

Unlike natural pandemics, totalitarian risk does have a mechanism of this kind, because totalitarian systems have intention. It seems likely that once the majority of major nations are enslaved (using BCIs), the rest are much more likely to be enslaved as well, as the enslaved nations expand to include nearby territories and secure their control.

With this in mind, we have taken 90% as an extreme lower bound, because there may be technological shifts in the future which upset this balance in unpredictable ways, such as by allowing small populations to defend themselves more effectively. It is also conceivable that power dynamics between major powers will leave certain countries protected from invasion.

However, while this is an estimate for a global totalitarian system (or multiple systems) controlling all of humanity, we expect the impact of BCIs on this part of totalitarian risk to be minimal. If a world totalitarian government sought to expand its reach to all of humanity, it is highly likely that it could reach everyone who posed a conceivable threat without the use of BCIs.

Furthermore, even if there were some small portion of escapees whom it was not cost-effective for regimes to track down, such a situation would have little effect on the potential of humanity's future; it would still be drastically diminished. As such, with totalitarian risk, complete reach may not be necessary. Even if the disaster reaches only 99.99% of living humans, it would still lock in an irreversibly negative future for all of our descendants, so it would still count as an existential catastrophe. Having a tiny fraction of humanity free would not have powerful longterm positive effects for humanity overall, as it is vanishingly unlikely that small, unthreatening populations could liberate humanity and recover a more positive future. As such, we expect the impact of BCIs on this variable to be rather small.

3.6 Likelihood of Lasting Indefinitely

As a point of definition, in this section we define indefinitely as "until the extinction of humanity and its descendants". In a totalitarian world that is entirely, or even mostly, implanted with BCIs, we consider indefinite survival of the regime to be highly likely. BCIs offer, for the first time, the ability to have an entirely loyal populace, with no dissenters.

In a scenario where all citizens (including the dictator) are implanted with BCIs, and their emotions are rewired for loyalty to the state, resistance to the dictatorship, or any end to it, would be incredibly unlikely. One major method of escaping a totalitarian government is internal overthrow, either by unhappy masses or by elites and closet skeptics within the party who are tempted by alternate systems. The ability to identify all such dissidents as soon as they even consider disloyalty makes revolt less likely. The ability to rewire subjects so they are not physically able to consider such disloyalty in the first place would seem to make revolt close to impossible. As such, we believe it is quite likely that BCI technology will drastically increase the chance of a global totalitarian state lasting indefinitely.

However, it is also possible that not all subjects will be implanted or altered, because a small ruling class may wish to remain free. This may offer a scenario for escape, if certain parts of the upper class rebel, or take control and choose to end the current subjugation and reprogram subjects back to their normal state. However, it is far from guaranteed that the upper class would remain free of mental alteration. In many authoritarian regimes, not just the lower classes but also the upper classes have been subject to coercion, and it is conceivable that certain levels of control (measures to ensure loyalty, etc.) would be implemented on all citizens. Dictators may even implant and slightly reprogram their heirs, to ensure they are not betrayed.

Furthermore, this scenario would require not just that the upper class is to some degree free of mental surveillance and alteration, but also that the upper class can successfully mount a coup without arousing suspicion and having their thoughts read, that they are able to take control of the nation's BCIs, and most importantly, that they choose to let go of their power and free the subjects currently controlled by their BCIs, rather than establishing their own regime.

There may also be situations we have not considered which lower the likelihood of such a dictatorship lasting indefinitely. With this in mind, we have taken 90% as an extreme lower bound probability.
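
To make the structure of these estimates explicit, the sketch below composes the three components from sections 3.4 to 3.6. Treating the components as independent and simply multiplying them is our own simplifying assumption for this illustration, not an argued result.

```python
# Composing the lower-bound estimates from sections 3.4-3.6.
# Simplifying assumption: the three components are independent,
# so the joint probability is their product.

p_spread  = 0.05  # 3.4: BCI-facilitated global spread of totalitarianism within 100 years
p_reach   = 0.90  # 3.5: extreme lower bound on reaching effectively all of humanity
p_lasting = 0.90  # 3.6: extreme lower bound on the regime lasting indefinitely

p_lock_in = p_spread * p_reach * p_lasting
print(f"Combined lower-bound estimate this century: {p_lock_in:.4f}")  # 0.0405, roughly 4%
```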

3.7 Level of Confidence in these Estimates

We have a low level of confidence that these estimates are exactly correct. However, we have a moderate degree of confidence that they are correct within an order of magnitude, and a high level of confidence that they are correct within two orders of magnitude.

If there is strong reason to suggest that our estimates are off by two orders of magnitude or more, then there is a reasonable case for ignoring the risk from BCIs in favour of prioritising other anthropogenic existential risks. Supposing our estimates are out by an order of magnitude, they would still be on par with the risk of extinction from nuclear war, and so should still be prioritised. If our estimates are relatively accurate (within an order of magnitude), then the existential risk from BCIs may be several times greater than that from nuclear war. Finally, there is also the possibility that our estimates have been too conservative, and have significantly underrated the level of risk from this technology.
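
As a rough illustration of what these orders of magnitude mean for prioritisation, the sketch below shifts the combined estimate up and down. The nuclear-war figure used for comparison is an assumption on our part, of the rough 1-in-1,000-per-century magnitude suggested in Ord (2020), not a number argued for in this paper.

```python
# Order-of-magnitude sensitivity of the combined estimate above.
# Assumed benchmark: existential risk from nuclear war this century of
# roughly 1 in 1,000 (the magnitude suggested in Ord 2020; an assumption here).

p_estimate = 0.04   # combined lower-bound estimate from the sketch above
p_nuclear  = 0.001  # assumed benchmark, for comparison only

for shift in (-2, -1, 0):
    p = p_estimate * 10 ** shift
    ratio = p / p_nuclear
    print(f"Estimate off by 10^{shift}: {p:.4f} ({ratio:.1f}x the nuclear benchmark)")

# Off by two orders of magnitude, the risk falls below the nuclear benchmark;
# off by one, it is of the same order; taken at face value, it is several
# tens of times larger. This mirrors the prioritisation argument above.
```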

As such, taking account of this uncertainty leads us to believe that this topic deserves discussion and deep analysis. Even if we do not have accurate probabilities, if there is a reasonable chance of this technology having a significant negative impact on the future of humanity, then it would be deeply reckless to ignore the risk it poses. Yet ignoring it is the current status quo.

4. Other Considerations

4.1 BCI as Existential Security Factor

In addition to being a risk factor, it is possible that BCIs may also serve as an existential security factor, decreasing the risk from AGI. One main mechanism has been suggested.

Elon Musk claims that BCIs may allow us to integrate with AI such that AI will not need to outcompete us (Young, 2019). It is unclear at present by what exact mechanism a BCI would assist here, whether it would actually decrease risk from AI, or if the claim is valid at all. Such a 'solution' to AGI may also be entirely compatible with global totalitarianism, and may not be desirable. The mechanism by which integrating with AI would lessen AI risk remains undiscussed, and at present no serious academic work has been done on the topic.

4.2 Risk Tradeoffs Between Security Factors

Tradeoffs between the risks of global totalitarianism and AGI may be a useful point of discussion in the future. It is conceivable that BCIs could reduce risk from AI; at present, however, it is unclear whether this would be the case, or why giving an AGI access to human brains would theoretically reduce risk at all. Claims like this require far more detailed examination.

More importantly, even supposing BCIs were a security factor for risk from AGI, they are likely not the only such security factor, and so may not be necessary in the solution at all. There may be other security factors which could be even more effective at reducing risk from AGI, and which do not meaningfully increase any other existential risks. These security factors would clearly be preferable.

As such, it is unclear that there is wisdom in definitively creating an existential risk factor in order to gain a mere chance of protection from a theorised existential risk.

Furthermore, if we are to build a technology which risks humanity's future in the hope of reducing another risk, this may be a poor strategic move if the reward is not certain. If BCIs fail to address risk from AGI, then they may lead to an overall increase in existential risk, and no decrease whatsoever.

4.3 Recommendations for Future Research

Given the insights from this paper, we recommend a few directions for future research.

  1. More in-depth analysis of the level of increase in risk caused by BCIs. In particular, this would be assisted by stronger estimates of the baseline likelihood of totalitarian risk over the next 100 years.

  2. A search for possible solutions which might reduce the level of risk caused by BCIs, or which might prevent the development of this risk factor.

  3. Analysis of these solutions in terms of cost-effectiveness (i.e. how much they might cost compared to how much they might decrease risk).

  4. Critical exploration of the possible impacts (positive and negative) of BCIs on other risks, such as the development of AGI.

5. Conclusion

This paper has sought to identify the potential of BCIs to increase the likelihood of longterm global totalitarianism. We suggest that even with highly conservative estimates, BCIs provide an increase to existential risk that is comparable to the level of risk posed by nuclear war, and almost double the risk from global totalitarianism given by current estimates. If less conservative estimates are accurate, the level of existential risk posed by BCIs may be an order of magnitude greater than this.

In addition, we identify that with the development of BCIs, totalitarianism would no longer require global spread to be sustainable in the longterm. We establish that the main weaknesses of totalitarian countries, caused by defection due to knowledge of better competing systems, can all be neutralised with BCI technology, specifically with the use of brain stimulation.

Finally, we establish that BCIs set up an unusual strategic environment, where the existential risk is likely to become harder to solve over longer time periods. This gives further reason to address this risk sooner rather than later, and to put significant effort into either preventing the development of BCIs, or guiding their development in a safe way, if this is possible.

Due to the current lack of discussion about this technology, and the high level of risk it poses (doubling the risk of global totalitarianism under our most conservative estimates, and increasing it by more than an order of magnitude under less conservative estimates), we believe that this risk factor deserves more discussion than it currently receives.

6. References

Abelson, J., Curtis, G., Sagher, O., Albucher, R., Harrigan, M., Taylor, S., Martis, B., & Giordani, B. (2005). Deep brain stimulation for refractory obsessive compulsive disorder. Biological Psychiatry, 57(5), 510-516.

Allison, B., Wolpaw, E., & Wolpaw, J. (2007). Brain-computer interface systems: progress and prospects. Expert Review of Medical Devices, 4(4), 463-474.

Anupama, H., Cauvery, N., & Lingaraju, G. (2012). Brain Computer Interface and its Types—A Study. International Journal of Advances in Engineering and Technology, 3(2), 739-745.

Beeson, M. (2010). The coming of environmental authoritarianism. Environmental Politics, 19(2), 276-294.

Bellman, C., Martin, M., MacDonald, S., Alomari, R., & Liscano, R. (2018). Have We Met Before? Using Consumer-Grade Brain-Computer Interfaces to Detect Unaware Facial Recognition. Computers in Entertainment, 16(2), 7.

Bernal, S., Celdran, A., Perez, G., Barros, M., & Balasubramaniam, S. (2019a). Cybersecurity in Brain Computer Interfaces: State-of-the-art, opportunities, and future challenges. https://arxiv.org/pdf/1908.03536.pdf. Accessed 18 June 2020.

Bernal, S., Huertas, A., & Perez, G. (2019b). Cybersecurity on Brain-Computer Interfaces: attacks and countermeasures. V Jornadas Nacionales de Investigación en Ciberseguridad.

Bittar, R., Kar-Purkayastha, I., Owen, S., Bear, R., Green, A., Wang, S., & Aziz, T. (2005a). Deep brain stimulation for pain relief: a meta-analysis. Journal of Clinical Neuroscience, 12(5), 515-519.

Bittar, R., Otero, S., Carter, H., & Aziz, T. (2005b). Deep brain stimulation for phantom limb pain. Journal of Clinical Neuroscience, 12(4), 399-404.

Bostrom, N. (2013). Existential Risk Prevention as Global Priority. Global Policy, 4(1), 15-31.

Bunce, S., Devaraj, A., Izzetoglu, M., Onaral, B., & Pourrezaei, K. (2005). Detecting deception in the brain: a functional near-infrared spectroscopy study of neural correlates of intentional deception. Proc. SPIE Nondestructive Detection and Measurement for Homeland Security III, 5769.

Burwell, S., Sample, M., & Racine, E. (2017). Ethical aspects of brain computer interfaces: a scoping review. BMC Medical Ethics, 18, 60.

Caplan, B. (2008). The Totalitarian Threat. In Nick Bostrom & Milan Cirkovic (eds), Global Catastrophic Risks. Oxford: Oxford University Press, 504-520.

Carmena, J., Lebedev, M., Crist, R., O'Doherty, J., Santucci, D., Dimitrov, D., Patil, P., Henriquez, C., & Nicolelis, M. (2003). Learning to control a brain-machine interface for reaching and grasping by primates. PLOS Biology, 1, 193-208.

Chenoweth, E. (2011). Why Civil Resistance Works. New York: Columbia University Press.

Constine, J. (2017). Facebook is building brain computer interfaces for typing and skin-hearing. https://techcrunch.com/2017/04/19/facebook-brain-interface/. Accessed 25 June 2020.

Cotton-Barratt, O., & Ord, T. (2015). Existential Risk and Existential Hope. Future of Humanity Institute Technical Report, 1.

DARPA. (2019). Six paths to the nonsurgical future of brain-machine interfaces. https://www.darpa.mil/news-events/2019-05-20. Accessed 16 June 2020.

Delgado, J.M.R. (1969). Physical Control of the Mind: Toward a Psychocivilized Society. New York: Harper and Row.

Demetriades, A.K., Demetriades, C.K., Watts, C., & Ashkan, K. (2010). Brain-machine interface: the challenge of neuroethics. The Surgeon, 8(5), 267-269.

Deuschl, G., Schade-Brittinger, C., Krack, P., Volkmann, J., Schafer, H., Botzel, K., Daniels, C., Deutschlander, A., Dillman, U., et al. (2006). A Randomized Trial of Deep-Brain Stimulation for Parkinson's Disease. New England Journal of Medicine, 355, 896-908.

Economist Intelligence Unit. (2019). Global Democracy Index 2019: A year of democratic setback and popular protest.

Ellsberg, D. (2017). The Doomsday Machine: Confessions of a Nuclear War Planner. New York: Bloomsbury Publishing USA.

Evers, K., & Sigman, M. (2013). Possibilities and limits of mind-reading: a neurophilosophical perspective. Consciousness and Cognition, 22, 887-897.

Fritsche, I., Cohrs, J., Kessler, T., & Bauer, J. (2012). Global warming is breeding social conflict: The subtle impact of climate change threat on authoritarian tendencies. Journal of Environmental Psychology, 32(1), 1-10.

Gaul, V. (2020). Brain Computer Interface Market by Type (Invasive BCI, Non-invasive BCI and Partially Invasive BCI), Application (Communication & Control, Healthcare, Smart Home Control, Entertainment & Gaming, and Others): Global Opportunity Analysis and Industry Forecast, 2020-2027. Allied Market Research. https://www.alliedmarketresearch.com/brain-computer-interfaces-market. Accessed 18 June 2020.

Glannon, W. (2009). Stimulating brains, altering minds. Journal of Medical Ethics, 35, 289-292.

Greenberg, B., Malone, D., Friehs, G., Rezai, A., Kubu, C., Malloy, P., Salloway, S., Okun, M., Goodman, W., & Rasmussen, S. (2006). Three-year outcomes in deep brain stimulation for highly resistant obsessive-compulsive disorder. Neuropsychopharmacology, 31, 2384-2393.

Guenther, F.H., Brumberg, J.S., Wright, E.J., Nieto-Castanon, A., Tourville, J.A., Panko, M., Law, R., Siebert, S.A., Bartels, J., Andreasen, D., Ehirim, P., Mao, H., & Kennedy, P. (2009). A Wireless Brain-Machine Interface for Real-Time Speech Synthesis. PLoS ONE, 4(12), e8218.

Gulati, T., Won, S.J., Ramanathan, D.S., Wong, C., Bodepudi, A., Swanson, R., & Ganguly, K. (2015). Robust neuroprosthetic control from the stroke perilesional cortex. Journal of Neuroscience, 35(22), 8653-8661.

Halpern, C.H., Wolf, J.A., Bale, T.L., Stunkard, A.J., Danish, S.F., Grossman, M., Jaggi, J., Grady, S., & Baltuch, G. (2008). Deep brain stimulation in the treatment of obesity. Journal of Neurosurgery, 109(4), 625-634.

Hamani, C., McAndrews, M., Cohn, M., Oh, M., Zumsteg, D., Shapiro, C., Wennberg, R., & Lozano, A. (2008). Memory enhancement induced by hypothalamic/fornix deep brain stimulation. Annals of Neurology, 63, 119-123.

Hatmaker, T. (2020). Senate seeks to ban Chinese app TikTok from government work phones. https://techcrunch.com/2020/03/12/hawley-bill-tiktok-china/. Accessed 17 June 2020.

Huxley, A. (1946). Science, Liberty and Peace. London: Harper.

Ienca, M. (2015). Neuroprivacy, neurosecurity and brain hacking: Emerging issues in neural engineering. Bioethica Forum, 8(2), 51-53.

Ienca, M., & Haselager, P. (2016). Hacking the brain: brain-computer interfacing technology and the ethics of neurosecurity. Ethics and Information Technology, 18, 117-129.

Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13, 5.

Ifft, P., Shokur, S., Li, Z., Lebedev, M., & Nicolelis, M. (2013). A brain-machine interface that enables bimanual arm movement in monkeys. Science Translational Medicine, 5(210), 210ra154.

Jacques, M. (2009). When China Rules the World: The End of the Western World and the Birth of a New Global Order. London: Allen Lane.

Kernel: Neuroscience as a Service. (2020). https://www.kernel.co/. Accessed 17 June 2020.

Klein, E., Brown, T., Sample, M., Truitt, A.R., & Goering, S. (2015). Engineering the brain: ethical issues and the introduction of neural devices. Hastings Center Report, 45(6), 26-35.

Kotchetkov, I.S., Hwang, B.Y., Appelboom, G., Kellner, C.P., & Connolly, E.S. (2010). Brain-computer interfaces: military, neurosurgical, and ethical perspective. Neurosurgical Focus, 28(5).

Kumar, K., Toth, C., & Nath, R.K. (1997). Deep brain stimulation for intractable pain: a 15-year experience. Neurosurgery, 40(4), 736-747.

Lammel, S., Lim, B.K., Ran, C., Huang, K.W., Betley, M., Tye, K., Deisseroth, K., & Malenka, R. (2012). Input-specific control of reward and aversion in the ventral tegmental area. Nature, 491, 212-217.

Laxton, A.L., & Lozano, A.M. (2013). Deep Brain Stimulation for the Treatment of Alzheimer's Disease and Dementias. World Neurosurgery, 80(3-4), S28.e1-S28.e8.

League of Nations. (1938). Protection of Civilian Populations Against Bombing From the Air in Case of War. League of Nations Resolution, September 30 1938. www.dannen.com/decision/int-law.html#d

Levy, R., Lamb, S., & Adams, J. (1987). Treatment of chronic pain by deep brain stimulation: long term follow-up and review of the literature. Neurosurgery, 21(6), 885-893.

Lipsman, N., & Lozano, A. (2015). Cosmetic neurosurgery, ethics, and enhancement. The Lancet Psychiatry, 2(7), 585-586.

Martin, B. (1990). Politics after a nuclear crisis. Journal of Libertarian Studies, 9(2), 69-78.

Martin, B. (2001). Technology for Nonviolent Struggle. London: War Resisters' International.

Mayberg, H., Lozano, A., Voon, V., McNeely, H., Seminowicz, D., Hamani, C., Schwalb, J., & Kennedy, S. (2005). Deep brain stimulation for treatment-resistant depression. Neuron, 45(5), 651-660.

Mazzoleni, M., & Previdi, F. (2015). A comparison of classification algorithms for brain computer interface in drug craving treatment. IFAC-PapersOnLine, 48(20), 487-492.

Merrill, N., & Chuang, J. (2018). From Scanning Brains to Reading Minds: Talking to Engineers about Brain-Computer Interface. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper no. 323, 1-11.

Moore, M.M. (2003). Real-world applications for brain-computer interface technology. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11(2), 162-165.

Moses, D.A., Leonard, M.K., Makin, J.G., & Chang, E. (2019). Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nature Communications, 10, 3096.

Munyon, C. (2018). Neuroethics of Non-primary Brain Computer Interface: Focus on Potential Military Applications. Frontiers in Neuroscience, 12, 696.

Musk, E. (2019). An Integrated Brain-Machine Interface Platform With Thousands of Channels. Journal of Medical Internet Research, 21(10).

Nijboer, F., Clausen, J., Allison, B., & Haselager, P. (2011). The Asilomar Survey: Stakeholders' Opinions on Ethical Issues Related to Brain-Computer Interfacing. Neuroethics, 6, 541-578.

Ord, T. (2020). The Precipice. London: Bloomsbury.

Orwell, G. (1945). You and the Atomic Bomb. Tribune (19 October 1945), reprinted in The Collected Essays, Journalism and Letters of George Orwell, Vol. 4: In Front of Your Nose, 1945-1950, ed. Sonia Orwell and Ian Angus (London: Secker and Warburg, 1968), pp. 6-9. www.orwell.ru/library/articles/ABomb/english/e_abomb. Accessed 10 June 2020.

Perlmutter, J., & Mink, J. (2006). Deep brain stimulation. Annual Review of Neuroscience, 29, 229-257.

Pillsbury, M. (2015). The Hundred-Year Marathon: China's Secret Strategy to Replace America as the Global Superpower. New York: Henry Holt.

Pinker, S. (2018). Enlightenment Now: The Case for Reason, Science, Humanism and Progress. New York: Viking.

Press Trust of India. (2019). China most likely to become sole global superpower by mid-21st Century, says Republican Senator Mitt Romney. https://www.firstpost.com/world/china-most-likely-to-become-sole-global-superpower-by-mid-21st-century-says-republican-senator-mitt-romney-7749851.html. Accessed 22 June 2020.

Regalado, A. (2017). The entrepreneur with the $100 million plan to link brains to computers. Technology Review. https://www.technologyreview.com/2017/03/16/153211/the-entrepreneur-with-the-100-million-plan-to-link-brains-to-computers/. Accessed 17 June 2020.

Roelfsema, P., Denys, D., & Klink, P.C. (2018). Mind reading and writing: the future of neurotechnology. Trends in Cognitive Sciences, 22(7), 598-610.

Roosevelt, F.D. (1939). An Appeal to Great Britain, France, Italy, Germany and Poland to Refrain from Air Bombing of Civilians. www.presidency.ucsb.edu/ws/?pid=15797. Accessed 11 June 2020.

Sandberg, A., & Bostrom, N. (2011). Machine Intelligence Survey. FHI Technical Report, 1.

Sharp, G. (1973). The Politics of Nonviolent Action. Boston, MA: Porter Sargent.

Statt, N. (2017). Kernel is trying to hack the human brain—but neuroscience has a long way to go. https://www.theverge.com/2017/2/22/14631122/kernel-neuroscience-bryan-johnson-human-intelligence-ai-startup. Accessed 17 June 2020.

Suthana, N., Haneef, Z., Stern, J., Mukamel, R., Behnke, E., Knowlton, B., & Fried, I. (2012). Memory enhancement and deep-brain stimulation of the entorhinal area. New England Journal of Medicine, 366, 502-510.

Torres, P. (2016). Climate Change is the Most Urgent Existential Risk. Future of Life Institute. https://futureoflife.org/2016/07/22/climate-change-is-the-most-urgent-existential-risk/. Accessed 11 June 2020.

Tsai, H.C., Zhang, F., Adamantidis, A., Stuber, G., Bonci, A., de Lecea, L., & Deisseroth, K. (2009). Phasic firing in dopaminergic neurons is sufficient for behavioral conditioning. Science, 324(5930), 1080-1084.

Tucker, P. (2018). Defense Intel Chief Worried About Chinese 'Integration of Human and Machines'. Defense One. https://www.defenseone.com/technology/2018/10/defense-intel-chief-worried-about-chinese-integration-human-and-machines/151904/. Accessed 17 June 2020.

Veale, F.J.P. (1993). Advance to Barbarism: The Development of Total Warfare from Sarajevo to Hiroshima. London: The Mitre Press.

Wright, R. (2004). A Short History of Progress. Toronto: Anansi Press.

Wu, S., Xu, X., Shu, L., & Hu, B. (2017). Estimation of valence of emotion using two frontal EEG channels. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 1127-1130.

Young, C. (2019). Key Takeaways from Elon Musk's Neuralink Presentation: Solving Brain Diseases and Mitigating AI Threat. https://interestingengineering.com/key-takeaways-from-elon-musks-neuralink-presentation-solving-brain-diseases-and-mitigating-ai-threat. Accessed 25 June 2020.

Zamiska, N. (2007). In China, brain surgery is pushed on the mentally ill. Wall Street Journal. https://www.wsj.com/articles/SB119393867164279313. Accessed 17 June 2020.

Zhang, S., Zhou, P., Jiang, S., Li, P., & Wang, W. (2017). Bilateral anterior capsulotomy and amygdalotomy for mental retardation with psychiatric symptoms and aggression: A case report. Medicine (Baltimore), 96(1), 10-13.