On AI Weapons

In this post I com­pre­hen­sively re­view the risks and up­sides of lethal au­tonomous weapons (LAWs). I in­cor­po­rate and ex­pand upon the ideas in this pre­vi­ous post of mine and the com­ments, plus other re­cent de­bates and pub­li­ca­tions.

My principal conclusions are:

1. LAWs are more likely to be a good development than a bad one, though there is quite a bit of uncertainty and one could justify being neutral on the matter. It is not justified to expend effort against the development of lethal autonomous weapons, as the expected benefits of such efforts do not outweigh their costs.

2. If some­one still op­poses lethal au­tonomous weapons, they should fo­cus on di­rectly mo­ti­vat­ing steps to re­strict their de­vel­op­ment with an in­ter­na­tional treaty, rather than fo­ment­ing gen­eral hos­tility to LAWs in Western cul­ture.

3. The con­cerns over AI weapons should pivot away from ac­ci­dents and moral dilem­mas, to­wards the ques­tion of who would con­trol them in a do­mes­tic power strug­gle. This is­sue is both more im­por­tant and more ne­glected.

Background: as far as I can tell, there has been no serious analysis judging whether the introduction of LAWs would be a good development or not. Despite this lack of foundation, a few members in or around the EA community have made some efforts to stop the new technology from being created, most notably the Future of Life Institute. So we should take a careful look at this issue and see whether these efforts ought to be scaled up, or whether they are harmful or merely a waste of time.

This ar­ti­cle is laid out as a sys­tem­atic clas­sifi­ca­tion of po­ten­tial im­pacts. I’m not fram­ing it as a di­rect re­sponse to any spe­cific liter­a­ture be­cause the ex­ist­ing ar­gu­ments about AI weapons are pretty scat­tered and un­struc­tured.

Re­spon­si­bil­ity: can you hold some­one re­spon­si­ble for a death caused by an LAW?

Op­po­nents of LAWs fre­quently re­peat the worry that they pre­vent us from hold­ing peo­ple re­spon­si­ble for bad ac­tions. But the idea of “hold­ing some­one re­spon­si­ble” is vague lan­guage and there are differ­ent rea­sons to hold peo­ple re­spon­si­ble for bad events. We should break this down into more spe­cific con­cerns so that we can talk about it co­her­ently.

Sense of jus­tice to vic­tims, fam­ily and compatriots

When crimes are committed, it's considered beneficial for affected parties to see that the perpetrators are brought to justice. When a state punishes one of its own people for a war crime, it's a credible signal that it disavows the action, providing reassurance to those who are suspicious or afraid of the state. Unpunished military crimes can become festering wounds of sociopolitical grievance. (See, for example, the murder of Gurgen Margaryan.)

But if an LAW takes a de­struc­tive ac­tion, there are many peo­ple who could be pun­ished for it:

· The op­er­a­tional com­man­der, if she au­tho­rized the use of LAWs some­where where it wasn’t worth the risks.

· The tac­ti­cal com­man­der or the Range Safety Officer, if the LAW was not em­ployed prop­erly or if ap­pro­pri­ate pre­cau­tions were not taken to pro­tect vuln­er­a­ble peo­ple.

· The main­te­nance per­son­nel, if they were neg­li­gent in pre­vent­ing or de­tect­ing a defi­ciency with the weapon.

· The pro­cure­ment team, if they were neg­li­gent in ver­ify­ing that the new sys­tem met its speci­fied re­quire­ments.

· The weapons man­u­fac­turer, if they failed to build a sys­tem that met its speci­fied/​ad­ver­tised re­quire­ments.

· The pro­cure­ment team, if they is­sued a flawed set of re­quire­ments for the new sys­tem.

If everyone does their job well and a deadly accident still happens, then the affected parties may not feel much grievance at all, since it was simply a tragedy, and they wouldn't demand justice in the first place. Of course, given the uncertainty and disinformation of the real world, they may falsely believe that someone deserves to be punished; but in that case someone can still be scapegoated – this happens anyway with human-caused accidents, where someone in the bureaucracy takes the blame. And finally, the AI itself might be punished, if that provides some sense of closure – it seems ridiculous now, but with more sophisticated LAWs, that could change.

Over­all, when it comes to the need to provide a sense of jus­tice to ag­grieved par­ties af­ter war crimes and ac­ci­dents, the re­place­ment of hu­mans with LAWs leaves a va­ri­ety of op­tions on the table. In ad­di­tion, it might help pre­vent the sense of grievance from turn­ing up in the first place.

In­sti­tu­tional in­cen­tives against failures

We hold peo­ple re­spon­si­ble for crimes be­cause this pre­vents crimes from be­com­ing more com­mon. If LAWs are de­vel­oped and de­ployed but no one in this pro­cess faces penalties for bad be­hav­ior, there could be no se­ri­ous re­straint against wide­spread, re­peated failures.

Looking at recent AI development, we can see that incentives against AI failure really don't require the identification of a specific guilty person. For instance, there has been significant backlash against Northpointe's COMPAS algorithm after a ProPublica analysis argued that it produced racially biased results. Sometimes these incentives don't require failures to occur at all; Google halted its involvement with Project Maven after employees opposed the basic idea of Google working on military AI tech. And Axon rejected the use of facial recognition on its body cam footage after its ethics panel decided that such technology wouldn't be suitably beneficial. There is a pervasive culture of fear about crimes, accidents, and disparate impacts perpetrated by AI, such that anyone working on this technology will know that their business success depends in part on their ability to avoid such perceptions.

That be­ing said, this kind of pop­ulist moral pres­sure is fickle and un­re­li­able, as gov­ern­ments and com­pa­nies are some­times will­ing to over­look it. We should have proper in­sti­tu­tional mechanisms rather than rely­ing on so­cial pres­sure. And ideally, such mechanisms could dampen the flames of so­cial pres­sure, so that the in­dus­try is po­liced in a man­ner which is more fair, more pre­dictable, and less bur­den­some.

But just as noted above, if an LAW com­mits a bad ac­tion, there are many ac­tors who might be pun­ished for it:

· The op­er­a­tional com­man­der, if she au­tho­rized the use of LAWs some­where where it wasn’t worth the risks.

· The tac­ti­cal com­man­der or the Range Safety Officer, if the LAW was not em­ployed prop­erly or if ap­pro­pri­ate pre­cau­tions were not taken to pro­tect vuln­er­a­ble peo­ple and equip­ment.

· The main­te­nance per­son­nel, if they were neg­li­gent in pre­vent­ing or de­tect­ing a defi­ciency with the weapon.

· The pro­cure­ment team, if they were neg­li­gent in ver­ify­ing that the new sys­tem met its speci­fied re­quire­ments.

· The weapons man­u­fac­turer, if they failed to build a sys­tem that met its speci­fied/​ad­ver­tised re­quire­ments.

· The pro­cure­ment team, if they is­sued a flawed set of re­quire­ments for the new sys­tem.

The idea that automated weapons prevent people from being punished for failures is easily rejected by anyone with military experience. We are well accustomed to the fact that, even if Private Joe is the one who pulled the trigger on a friendly or a civilian, the fault often lies with an officer or his first-line noncommissioned officer, who gets disciplined accordingly. Replacing Private Joe with a robot doesn't change that. The military has an institutional concern for safety and proactively assigns responsibility for risk mitigation.

That be­ing said, there are some prob­lems with ‘au­toma­tion bias’ where peo­ple are prone to offload too much re­spon­si­bil­ity to ma­chines (Scharre 2018), thus limit­ing their abil­ity to prop­erly re­spond to in­sti­tu­tional in­cen­tives for proac­tive safety. How­ever this could be ame­lio­rated with proper fa­mil­iariza­tion as these au­tonomous sys­tems are more thor­oughly used and un­der­stood. It’s not clear if this kind of bias will per­sist across cul­tures and in the long run. It’s also not clear if it is greater than hu­mans’ ten­dency to offload too much re­spon­si­bil­ity to other hu­mans.

Are the ex­ist­ing laws and poli­cies ad­e­quate?

Even if accountability for LAWs is theoretically acceptable, the existing policies and laws for things like war crimes and fratricide might have loopholes or other deficiencies for bad actions perpetrated by LAWs. I don't know if they actually do. But if so, the laws and guidelines should and will be updated appropriately as force compositions change. There may be some deficiencies in the meantime, but remember that banning LAWs would take time and legal work to achieve in the first place. In the time that it takes to achieve a national or international LAW ban, laws and policies could be suitably updated to accommodate LAW failures anyway. And actual LAW development and deployment will take time as well.

Conclusion

The need to hold some­one re­spon­si­ble for failures gen­er­ally does not pose a ma­jor rea­son against the de­vel­op­ment and de­ploy­ment of LAWs. There is some po­ten­tial for prob­lems where op­er­a­tors offload too much re­spon­si­bil­ity to the ma­chine be­cause they mi­s­un­der­stand its ca­pa­bil­ities and limi­ta­tions, but it’s not ap­par­ent that this gives much over­all rea­son to op­pose the tech­nol­ogy.

Ex­ist­ing laws and poli­cies on war may be in­suffi­cient to ac­com­mo­date LAWs, but they can be ex­pected to im­prove in the same time that it would take for LAW bans or de­vel­op­ment/​de­ploy­ment to ac­tu­ally oc­cur.

Do LAWs cre­ate a moral haz­ard in fa­vor of con­flict?

An ob­vi­ous benefit of LAWs is that they keep sol­diers out of harm’s way. But some peo­ple have wor­ried that re­mov­ing sol­diers from con­flict zones re­moves a strong poli­ti­cal in­cen­tive against war­mak­ing, which means more war and more ca­su­alties from col­lat­eral dam­age. This ar­gu­ment would also ap­ply against tech­nolo­gies like body ar­mor and the re­place­ment of con­scripted armies with smaller pro­fes­sional fight­ing forces, as these de­vel­op­ments also re­duce bat­tlefield deaths, but cu­ri­ously peo­ple don’t con­demn those de­vel­op­ments. Thus, the use of this ar­gu­ment is sus­pi­ciously ar­bi­trary. And it does seem wrong to say that we should all use con­scripted armies as can­non fod­der, or ban body ar­mor, be­cause more grievous bat­tlefield ca­su­alties will en­sure that poli­ti­ci­ans don’t start too many wars. I would sooner as­sume that the net effect of mil­i­tary safety in­no­va­tions is to re­duce over­all deaths, de­spite small po­ten­tial in­creases in the fre­quency of con­flict. In­ter­na­tional re­la­tions liter­a­ture gen­er­ally doesn’t fo­cus on mil­i­tary deaths as be­ing the sole or pri­mary ob­sta­cle to mil­i­tary con­flict; in re­al­ity states fol­low pat­terns of broader in­cen­tives like state se­cu­rity, regime preser­va­tion, in­ter­na­tional law and norms, elite opinion, pop­u­lar opinion and so on, which are in turn only par­tially shaped by bat­tlefield ca­su­alties. You could elimi­nate com­bat­ant deaths en­tirely, and there would still be a va­ri­ety of rea­sons for states to not go to war with each other. For one thing, they would definitely seek to avoid the risk of civilian ca­su­alties in their own coun­tries.

Still, it’s not ob­vi­ous that this logic is wrong. So let’s take a more care­ful look. A key part of this is the civilian ca­su­alty ra­tio: how many vic­tims of war are civili­ans, as op­posed to sol­diers? Eck­hardt (1989) took a com­pre­hen­sive look at war across cen­turies and found that the ra­tio is about 50%. A New York Times ar­ti­cle claimed that a 2001 Red Cross study iden­ti­fied a civilian ca­su­alty ra­tio of 10:1 (so 91% of ca­su­alties are civili­ans) for the later 20th cen­tury, but I can­not find the study any­where. Since we are speci­fi­cally look­ing at mod­ern, tech­nolog­i­cal war­fare, I de­cided to look at re­cent war statis­tics on my own.
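To make the arithmetic in the figures below explicit, I compute the civilian casualty ratio as civilian deaths over total proximate deaths:

$$ r_{\text{civ}} = \frac{D_{\text{civilian}}}{D_{\text{civilian}} + D_{\text{military}}}, $$

so a ratio quoted as "10:1" corresponds to $r_{\text{civ}} = 10/11 \approx 91\%$.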

Con­ven­tional, in­ter­state conflict

The deadliest recent interstate war was the 1980-1988 Iran-Iraq War. Estimates vary widely, but the one source which describes both military and civilian casualties lists 900,000 military dead and 100,000 civilian dead. Recent census evidence suggests the military death tolls might actually have been far lower, possibly just a few hundred thousand. One could also include the Anfal Genocide for another 50,000-100,000 civilian deaths. So the civilian casualty ratio for the Iran-Iraq War was between 10% and 40%.
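To show where those endpoints come from (the 300,000 figure below is just my illustrative reading of "a few hundred thousand"): the high-end military estimate gives the low bound, and the low-end military estimate plus the Anfal deaths gives the high bound:

$$ \frac{100{,}000}{100{,}000 + 900{,}000} = 10\%, \qquad \frac{100{,}000 + 100{,}000}{100{,}000 + 100{,}000 + 300{,}000} = 40\%. $$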

In the 1982 Falk­lands War, 3 civili­ans were kil­led com­pared to 900 mil­i­tary per­son­nel, for a civilian ca­su­alty ra­tio of 0.3%.

Casualty figures for the 1982 Lebanon War are deeply disputed and uncertain. Total deaths probably amount to 15,000-20,000. A study by an independent newspaper in Beirut, supplemented by Lebanese officials, found that approximately half of the local dead were civilians; meanwhile an official at a relief organization claimed that 80% of the dead were civilians (Race and Class). 657 Israeli military deaths, and 460-3,500 civilian massacre victims, could change this figure a little bit. Yasser Arafat and the Israeli government have both made more widely differing claims, which should probably be considered unreliable. Overall, the civilian casualty ratio for the 1982 Lebanon War was probably between 50% and 80%.

A 2002 Medact re­port sum­ma­rizes a cou­ple sources on Iraqi ca­su­alties in the 1991 Gulf War, but Kuwaiti and coal­i­tion deaths should be in­cluded as well. Then the Gulf War caused 105,000-125,000 mil­i­tary deaths and 8,500-26,000 prox­i­mate civilian deaths, for a civilian ca­su­alty ra­tio be­tween 8% and 17%. The num­ber of civilian ca­su­alties can be con­sid­ered much higher if we in­clude in­di­rect deaths from things like the wors­ened pub­lic health situ­a­tion in Iraq, but we should not use this method­ol­ogy with­out do­ing it con­sis­tently across the board (I will re­turn to this point in a mo­ment).

The Bos­nian War kil­led 57,700 sol­diers and 38,200 civili­ans, for a civilian ca­su­alty ra­tio of 40%.

Az­eri civilian ca­su­alties from the Nagorno-Karabakh War are un­known; just look­ing at the Ar­me­nian side, there were 6,000 mil­i­tary and 1,264 civilian deaths, for a civilian ca­su­alty ra­tio of 17%.

The Kosovo War kil­led 2,500-5,000 sol­diers and 13,548 civili­ans, for a civilian ca­su­alty ra­tio be­tween 73% and 84%.

The 2003 in­va­sion of Iraq kil­led ap­prox­i­mately 6,000 com­bat­ants and 4,000 civili­ans ac­cord­ing to a Pro­ject on Defense Alter­na­tives study, for a civilian ca­su­alty ra­tio of 40%.

The 2008 Russo-Ge­or­gian War kil­led 350 com­bat­ants and 400 civili­ans, for a civilian ca­su­alty ra­tio of 53%.

The War in Don­bass, while tech­ni­cally not an in­ter­state con­flict, is the most re­cent case of con­ven­tional war­fare be­tween or­ga­nized ad­vanced com­bat­ants. There have been 10,000 mil­i­tary deaths and 3,300 civilian deaths, for a civilian ca­su­alty ra­tio of 25%.

The in­ter­na­tional mil­i­tary in­ter­ven­tion against ISIL was for the most part fought against a con­ven­tion­ally or­ga­nized proto-state, even though ISIL wasn’t offi­cially a state. There were 140,000 com­bat­ant deaths and 54,000 civilian deaths in Iraq and Syria, for a civilian ca­su­alty ra­tio of 28%.

There have also been a few wars where I can't find any information indicating that there was a large number of civilian casualties, like the Toyota War and the Kargil War, though I haven't done a deep dive through sources.

Over­all, it looks like civili­ans tend to con­sti­tute a large minor­ity of prox­i­mate deaths in mod­ern in­ter­state war­fare, though it can vary greatly. How­ever, civilian ca­su­alties could be much higher if we ac­count for all in­di­rect effects. For in­stance, the Gulf War led to a va­ri­ety of nega­tive health out­comes in Iraq, mean­ing that to­tal civilian deaths could add up to sev­eral hun­dred thou­sand (Medact). When con­sid­er­ing the long-run effects of things like re­fugee dis­place­ment and in­sti­tu­tional shock and de­cay, which of­ten have dis­pro­por­tionate im­pact on chil­dren and the poor, it may well be the case that civili­ans bear the ma­jor­ity of bur­dens from mod­ern con­ven­tional war­fare.

The par­tial flaw in this line of rea­son­ing is that it must also be ap­plied to al­ter­na­tives. For in­stance, if dis­putes are an­swered with eco­nomic sanc­tions rather than war­fare, then similar in­di­rect effects may worsen or end many civilian lives, while leav­ing mil­i­tary per­son­nel largely un­scathed. Frozen con­flicts can lead to im­pov­er­ished re­gions with in­fe­rior poli­ti­cal sta­tus and sys­tem­atic un­der-pro­vi­sion­ing of in­sti­tu­tions and pub­lic goods. If dis­putes are re­solved by le­gal pro­cesses or di­alogue, then there prob­a­bly aren’t such nega­tive in­di­rect effects, but this is of­ten un­re­al­is­tic for se­ri­ous in­ter­state dis­putes on the an­ar­chic in­ter­na­tional stage (at least not un­less backed up by a cred­ible will­ing­ness to use mil­i­tary force). More­over, vet­eran mor­tal­ity as an in­di­rect re­sult of com­bat ex­pe­rience can also be high.

So let's say that our expectation for the civilian casualty ratio in the future is 50%. Then, if militaries became completely automated, the frequency of war would have to more than double in order for total deaths to increase. On its face, this seems highly unlikely. If the civilian casualty ratio were 75%, then fully automating war would only have to increase its frequency by more than one-third for total deaths to increase, but this still seems unlikely to me. It is difficult to see how a nation will become much more willing to go to war when, no matter how well automated its military is, it will face high numbers of civilian casualties.
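To spell out that arithmetic: if a fraction $c$ of each war's deaths are civilians, then removing all soldier deaths scales each war's toll by $c$, so total deaths rise only if the frequency of war grows by more than a factor of $1/c$:

$$ f_{\text{new}} \cdot c \cdot D > f_{\text{old}} \cdot D \iff \frac{f_{\text{new}}}{f_{\text{old}}} > \frac{1}{c}, $$

where $D$ is the death toll of a typical war before automation and $f$ is the frequency of war. With $c = 0.5$ the threshold is a doubling; with $c = 0.75$ it is a factor of $4/3$, i.e. a one-third increase.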

NATO par­ti­ci­pa­tion in over­seas conflicts

The fact that peo­ple use the moral haz­ard ar­gu­ment against AI weapons but haven’t used it against other de­vel­op­ments such as body ar­mor may be par­tially caused by poli­ti­cal cir­cum­stance. The typ­i­cal frame of refer­ence for Western anti-war at­ti­tudes is not near-peer con­ven­tional con­flict, but re­cent wars (usu­ally coun­ter­in­sur­gen­cies) where Amer­ica and Western Euro­pean states have suffered rel­a­tively few ca­su­alties com­pared to very large num­bers of en­e­mies and civili­ans. For in­stance, in the Iraq War, coal­i­tion deaths to­taled 4,800 whereas Iraqi deaths to­taled 35,000 (com­bat­ants) and 100,000 (civili­ans). And Western liberal states are fa­mously averse to suffer­ing mil­i­tary ca­su­alties, at least when vot­ers do not per­ceive the war effort as nec­es­sary for their own se­cu­rity. So in this con­text, the ‘moral haz­ard’ ar­gu­ment makes more sense.

How­ever, in this spe­cific con­text, it’s not clear that we should de­sire less war­mak­ing willpower in the first place! Note that the wor­ries over ca­su­alties were not a sig­nifi­cant ob­sta­cle to the rapid and op­er­a­tionally suc­cess­ful in­va­sions of Kuwait, Iraq and Afghanistan. Rather, such aver­sion to ca­su­alties pri­mar­ily serves as an ob­sta­cle to pro­tracted coun­ter­in­sur­gency efforts like the fol­low-on cam­paigns in Iraq and Afghanistan. And these sta­bil­ity efforts are good at least half the time. Con­sider that Amer­ica’s with­drawal from Iraq in 2011 par­tially con­tributed to the growth of ISIL and the 2014-2017 Iraqi Civil War (NPR). The re­sur­gence of the Tal­iban in Afghanistan was par­tially caused by the lack of in­ter­na­tional forces to bolster Afghanistan’s gov­ern­ment in the wake of the in­va­sion (Jones 2008). In 2006, 2007, and 2009, the ma­jor­ity of Afghans sup­ported the pres­ence of US and NATO forces in Afghanistan, with an over­whelming ma­jor­ity op­pos­ing Tal­iban rule (BBC). Mili­tary op­er­a­tions can be very cost effec­tive com­pared to do­mes­tic pro­grams: Oper­a­tion In­her­ent Re­solve kil­led ISIL fighters for less than $200,000 each, per US gov­ern­ment statis­tics (Fox News); this is low com­pared to the cost of neu­tral­iz­ing a vi­o­lent crim­i­nal or sav­ing a few lives in the US. A va­ri­ety of re­search in­di­cates that peace­keep­ing is a valuable ac­tivity.

While some peo­ple have ar­gued that a more paci­fist route would pro­mote stronger col­lec­tive norms against war­fare in the long run, this ar­gu­ment cuts both ways: sta­bil­ity op­er­a­tions can pro­mote stronger norms against rad­i­cal Is­lamist ide­ol­ogy, un­lawful re­bel­lion, eth­nic cleans­ing, vi­o­la­tions of UN and NATO re­s­olu­tions, and so on.

In any case, Amer­ica is slowly de-es­ca­lat­ing its in­volve­ment in the Mid­dle East in or­der to fo­cus more on pos­tur­ing for great power com­pe­ti­tion in the Indo-Pa­cific. So this con­cern will prob­a­bly not be as im­por­tant in the fu­ture as it seems now.

Conclusion

For con­ven­tional con­flicts, re­duc­ing sol­dier ca­su­alties prob­a­bly won’t in­crease the fre­quency of war­fare very much, so LAW use will re­duce to­tal death (as­sum­ing that they func­tion similarly to hu­mans, which we will in­ves­ti­gate be­low). For over­seas coun­ter­in­sur­gency and coun­tert­er­ror­ism op­er­a­tions such as those performed by NATO coun­tries in the Mid­dle East, the availa­bil­ity of LAWs could make mil­i­tary op­er­a­tions more fre­quent, but this could just as eas­ily be a good thing rather than a bad one (again, as­sum­ing that the LAWs have the same effects on the bat­tlefield as hu­mans).

Will LAWs worsen the con­duct of war­fare?

Accidents

AI sys­tems can make mis­takes and cause ac­ci­dents, but this is a prob­lem for hu­man sol­diers as well. Whether ac­ci­dent rates will rise, fall or re­main the same de­pends on how effec­tive AI is when the mil­i­tary de­cides to adopt it for a given bat­tlefield task. In­stead of try­ing to perform guess­work on how satis­fac­tory or un­satis­fac­tory we per­ceive cur­rent AI sys­tems to be, we should start with the fact that AI will only be adopted by mil­i­taries if it has a suffi­cient level of bat­tlefield com­pe­tence. For in­stance, it must be able to dis­crim­i­nate friendly uniforms from en­emy uniforms so that it doesn’t com­mit fra­t­ri­cide. If a ma­chine can do that as well as a hu­man can, then it could similarly dis­t­in­guish en­emy uniforms from civilian clothes. Since the mil­i­tary won’t adopt a sys­tem that’s too crude to dis­t­in­guish friendlies from en­e­mies, it won’t adopt one that can’t dis­t­in­guish en­e­mies from civili­ans. In ad­di­tion, the mil­i­tary already has in­cen­tives to min­i­mize col­lat­eral dam­age; rules of en­gage­ment tend to be adopted by the mil­i­tary for mil­i­tary pur­poses rather than be­ing forced by poli­ti­ci­ans. Prevent­ing harm to civili­ans is com­monly in­cluded as an im­por­tant goal in Western mil­i­tary op­er­a­tions. Less pro­fes­sional mil­i­taries and au­to­cratic coun­tries tend to care less about col­lat­eral dam­age, but in those mil­i­taries, it’s all the more im­por­tant that their undis­ci­plined sol­diers be re­placed by more con­trol­lable LAWs which will be less prone to abuses and ab­ject care­less­ness.

You could worry that LAWs will have a spe­cific com­bi­na­tion of ca­pa­bil­ities which ren­ders them bet­ter for mil­i­tary pur­poses while be­ing worse all-things-con­sid­ered. For in­stance, maybe a ma­chine will be bet­ter than hu­mans at nav­i­gat­ing and win­ning on the bat­tlefield, but at the cost of worse ac­ci­dents to­wards civili­ans. A spe­cific case of this is the preva­lent idea that AI doesn’t have enough “com­mon sense” to be­have well at tasks out­side its nar­row purview. But it’s not clear if this ar­gu­ment works be­cause com­mon sense is cer­tainly im­por­tant from a warfight­ing point of view; con­versely, it may not be one of the top fac­tors that pre­vents sol­diers from caus­ing ac­ci­dents. Maybe AI’s amenabil­ity to up­dates, its com­pu­ta­tional ac­cu­racy, or its great mem­ory will make it very good at avoid­ing ac­ci­dents, with its lack of com­mon sense be­ing the limit­ing fac­tor pre­vent­ing it from hav­ing bat­tlefield util­ity.

Even if AI does lead to an in­crease in ac­ci­dents, this could eas­ily be out­weighed by the num­ber of sol­diers saved by be­ing au­to­mated off the bat­tlefield. Con­sider the in­tro­duc­tion of Pha­lanx CIWS. This au­to­mated weapon sys­tem has kil­led two peo­ple in ac­ci­dents. Mean­while, at least four Amer­i­can CIWS-equipped ships have suffered fatal in­ci­dents: the USS Stark (struck by an Ira­nian mis­sile in 1987), the USS Cole (dam­aged by a suicide bomb in 2000), the USS Fitzger­ald (col­lided with a trans­port in 2017), and the USS John S. McCain (col­lided with a trans­port in 2017). There were 71 deaths from these in­ci­dents out of ap­prox­i­mately a thou­sand to­tal crew­men. Re­plac­ing the ships’ nine Pha­lanx sys­tems with, say, eigh­teen (less effec­tive) man­ual au­to­can­non tur­rets would have added thirty-six gun crew, who would suffer 2.5 statis­ti­cal deaths given the ac­ci­dent mor­tal­ity rate on these four ships. This still does not in­clude pos­si­ble deaths from minor in­ci­dents and deaths from in­ci­dents in for­eign navies which also use Pha­lanx. Thus, it ap­pears that the in­tro­duc­tion of Pha­lanx is more likely to have de­creased ac­ci­dent deaths.
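As a back-of-the-envelope check on that figure, using the numbers above: the accident mortality rate across the four incidents was roughly 71 deaths per ~1,000 crew, so

$$ 36 \text{ added gun crew} \times \frac{71}{1{,}000} \approx 2.5 \text{ expected accident deaths} > 2 \text{ Phalanx accident deaths}. $$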

Over­all, it’s not clear if ac­ci­den­tal ca­su­alties will in­crease or de­crease with au­toma­tion of the bat­tlefield. In the very long run how­ever, ma­chines can be ex­pected to sur­pass hu­mans in many di­men­sions, in which case they will ro­bustly re­duce the risk of ac­ci­dents. If we de­lay de­vel­op­ment and re­in­force norms and laws against LAWs now, it may be more difficult to de­velop and de­ploy such clearly benefi­cial sys­tems in the longer-run fu­ture.

Ethics

It’s not im­me­di­ately clear whether LAWs would be more or less eth­i­cal than hu­man sol­diers. Un­like sol­diers, they would not be at risk of in­de­pen­dently and de­liber­ately com­mit­ting atroc­i­ties. Their eth­i­cal views could be de­cided more firmly by the gov­ern­ment based on the pub­lic in­ter­est, as op­posed to hu­mans who lack eth­i­cal trans­parency and do not change their minds eas­ily. They could be built free of vices like anger, libido, ego­tism, and fa­tigue which can lead to un­eth­i­cal be­hav­ior.

One downside is that their behavior could be more brittle in unusual situations where they have to make an ethical choice beyond the scope of regular laws and procedures, but these are outlier scenarios which are minor in comparison to other issues (Scharre 2018), and often it's not clear what the right action is in those cases anyway. If the right action in a situation is unclear, then we shouldn't be worried about which action will be taken. Moral outcomes matter, not the mere process of moral deliberation. If there is a moral dilemma where people intractably disagree about the right thing to do, then settling the matter with even a coin flip would be just as good as randomly picking a person to use their personal judgment.

LAWs increase the psychological distance from killing (Scharre 2018), potentially reducing the level of restraint against dubious killing. However, it's a false comparison to measure the psychological distance of an infantryman against the psychological distance of a commander employing LAWs. The infantryman will be replaced by the LAW, which has no psychology at all. Rather, we should compare the psychological distance of a commander using infantrymen against the psychological distance of a commander using LAWs. In this case, there will be a similar level of psychological distance. In any case, increased psychological distance from killing can be beneficial for the mental health of military personnel.

Some ob­ject that it is difficult or im­pos­si­ble to reach a philo­soph­i­cal con­sen­sus on the eth­i­cal is­sues faced by AI sys­tems. How­ever, this car­ries no weight as it is equally im­pos­si­ble to reach a philo­soph­i­cal con­sen­sus on how hu­mans should be­have ei­ther. And in prac­tice, le­gal/​poli­ti­cal com­pro­mises, per­haps in the form of moral un­cer­tainty, can be im­ple­mented de­spite the lack of philo­soph­i­cal con­sen­sus.

AI sys­tems could be bi­ased in their eth­i­cal eval­u­a­tions, but this will gen­er­ally be less sig­nifi­cant than bias among hu­mans.

Over­all, it’s not re­ally clear whether LAWs will be­have more or less eth­i­cally than hu­mans, though I would ex­pect them to be a lit­tle more eth­i­cal be­cause of the cen­tral­ized reg­u­la­tion and gov­ern­ment stan­dards. But in the long run, AI be­hav­ior can be­come more eth­i­cally re­li­able than hu­man be­hav­ior on the bat­tlefield. If we de­lay de­vel­op­ment and re­in­force norms and laws against LAWs now, it may be more difficult to de­velop and de­ploy such clearly benefi­cial sys­tems in the longer-run fu­ture.

Escalation

There is a worry that au­tonomous weapons will make tense mil­i­tary situ­a­tions be­tween non-bel­liger­ent na­tions less sta­ble and more es­ca­la­tory, prompt­ing new out­breaks of war (Scharre 2018). It’s not clear if ac­ci­den­tal first shots will be­come more fre­quent: LAW be­hav­ior could be more tightly con­trol­led to avoid mis­takes and in­ci­dents. How­ever, they do have faster re­sponse times and more brit­tle­ness in chaotic en­vi­ron­ments, which could en­able quicker es­ca­la­tion of vi­o­lent situ­a­tions, analo­gous to a “flash crash” on Wall Street.

How­ever, the flip side of this is that hav­ing fewer hu­mans pre­sent in these kinds of situ­a­tions im­plies that out­breaks of vi­o­lence will have less poli­ti­cal sting and there­fore more chance of end­ing with a peace­ful solu­tion. A coun­try can always be satis­fac­to­rily com­pen­sated for lost ma­chin­ery through fi­nan­cial con­ces­sions; the same can­not be said for lost sol­diers. Flash fire­fights of­ten won’t lead to war in the same sense that flash crashes of­ten don’t lead to re­ces­sions.

A re­lated benefit of rapid re­ac­tions is that they make it more risky for the en­emy to launch a sur­prise strike. When a state can’t count on hu­man lethargy to in­hibit en­emy re­ac­tions, it’s go­ing to think twice be­fore ini­ti­at­ing a war.

Over­all, it seems that es­ca­la­tion prob­lems will get worse, but these coun­ter­vailing con­sid­er­a­tions re­duce the mag­ni­tude of the worry.

Hacking

One could ar­gue that the ex­is­tence of LAWs makes it pos­si­ble for hack­ers such as an un­friendly ad­vanced AI agent to take charge of them and use them for bad ends. How­ever, in the long run a very ad­vanced AI sys­tem would have many tools at its dis­posal for cap­tur­ing global re­sources, such as so­cial ma­nipu­la­tion, hack­ing, nan­otech­nol­ogy, biotech­nol­ogy, build­ing its own robots, and things which are be­yond cur­rent hu­man knowl­edge. A su­per­in­tel­li­gent agent would prob­a­bly not be limited by hu­man pre­cau­tions; mak­ing the world as a whole less vuln­er­a­ble to ASI is not a com­monly sug­gested strat­egy for AI safety since we as­sume that once it gets onto the in­ter­net then there’s not re­ally any­thing that can be done to stop it. Plus, it’s wrong to as­sume that an AI sys­tem with bat­tlefield ca­pa­bil­ities, which is just as good at gen­eral rea­son­ing as the hu­mans it re­placed, would be vuln­er­a­ble to a sim­ple hack or takeover. If a ma­chine can perform com­plex in­fer­ence re­gard­ing mil­i­tary rules, its du­ties on the bat­tlefield, and the ac­tions it can take, then it’s likely to have plenty of re­sis­tance and skep­ti­cism about du­bi­ous com­mands.

The more rele­vant is­sue is the near term. In this case, au­tonomous tech­nolo­gies might not be any less se­cure than many other tech­nolo­gies which we cur­rently rely on. A fighter jet has elec­tron­ics, as does a power plant. Lots of things can the­o­ret­i­cally be hacked, and hack­ing an LAW to cause some dam­age isn’t nec­es­sar­ily any more de­struc­tive than hack­ing in­fras­truc­ture or a manned ve­hi­cle. Re­place the GPS co­or­di­nates in a JDAM bomb pack­age and you’ve already figured out how to use our ex­ist­ing equip­ment to de­liber­ately cause many civilian ca­su­alties. Re­mote-con­trol­led robots are prob­a­bly eas­ier to hack than LAWs, be­cause they rely on an­other chan­nel send­ing them or­ders. Things like this don’t hap­pen of­ten, how­ever. Also, the mil­i­tary has perfectly good in­cen­tives on its own to avoid us­ing sys­tems which are vuln­er­a­ble to hack­ing. By far the largest risk with hack­ing is en­emy ac­tors hack­ing weapons for their own pur­poses – some­thing which the mil­i­tary will be ad­e­quately wor­ried about avoid­ing.

Trust

Lyall and Wilson (2009), examining both a comprehensive two-century data set and a close case study of two U.S. Army divisions in Iraq, found that mechanization worsens the prospects for effective counterinsurgency. Putting troops behind wheels and armor alienates the populace and increases opposition to the military involvement. And I think this can be a case of perverse incentives rather than just a military error because, as noted above, liberal democracies are loss-averse in protracted counterinsurgencies. It takes a lot of political willpower to suffer an additional twenty American troop deaths even if it means that the campaign will be wrapped up sooner with one hundred fewer Iraqi deaths.

I pre­sume that if LAWs are used for coun­ter­in­sur­gency op­er­a­tions then they are very likely to dis­play this same effect. They could be overused to save sol­dier lives in the short term with worse re­sults for the lo­cals and for the long term.

How­ever, run­ning LAWs in COIN op­er­a­tions in close prox­im­ity to civili­ans puts them in one of the most com­plex and difficult en­vi­ron­ments for an AI to nav­i­gate safely and effec­tively. I don’t think it makes mil­i­tary sense to do this un­til AI is very com­pe­tent. In ad­di­tion, robotic ve­hi­cles could eas­ily be used in coun­ter­in­sur­gen­cies if LAWs are un­available, and they will cause similarly nega­tive per­cep­tions. (Lo­cals may not no­tice the differ­ence. I vaguely re­call a story of US in­fantry in the in­va­sion of Afghanistan who, with all their body ar­mor and Oak­leys, were ac­tu­ally thought by the iso­lated villagers to be some kind of robotic or oth­er­wor­ldly vis­i­tors. I don’t re­mem­ber the ex­act de­tails but you get the point.) So while I would definitely urge ex­tra cau­tion about us­ing LAWs for coun­ter­in­sur­gency, I con­sider this only a small rea­son against the de­vel­op­ment of LAWs in gen­eral.

Conclusion

The weight of evidence does not show that LAWs will display significantly more accidents or unethical behavior; in fact, they're more likely to be safer because of centralized standards and control.

Hack­ing vuln­er­a­bil­ity will be in­creased, but this is not an ex­ter­nal­ity to con­ven­tional mil­i­tary de­ci­sion mak­ing.

The main risk is that LAWs could cause con­flicts to start and es­ca­late more quickly. How­ever, the au­toma­tion of mil­i­taries would help pre­vent such an in­ci­dent from be­com­ing a ca­sus belli, and could provide a more pre­dictable de­ter­rent against ini­ti­at­ing con­flicts.

LAWs could worsen our coun­ter­in­sur­gency efforts, but this is a rel­a­tively small is­sue since they are in­her­ently un­suited for the pur­pose, and the al­ter­na­tive might be equally prob­le­matic robots any­way.

How will LAW de­vel­op­ment af­fect gen­eral AI de­vel­op­ment?

Arms races

You might say that LAWs will prompt an in­ter­na­tional arms race in AI. Arms races don’t nec­es­sar­ily in­crease the risk of war; in fact they can de­crease it (In­tril­i­ga­tor and Brito 1984). We should per­haps hold a weak over­all pre­sump­tion to avoid arms races. It’s more ap­par­ent that an AI arms race could in­crease the risk of a catas­tro­phe with mis­al­igned AGI (Arm­strong et al 2016) and would be ex­pen­sive. But faster AI de­vel­op­ment will help us avoid other kinds of risks un­re­lated to AI, and it will ex­pe­d­ite hu­man­ity’s progress and ex­pan­sion.

More­over, no mil­i­tary is cur­rently at the cut­ting edge of AI or ma­chine learn­ing (as far as we can tell). The top re­search is done in academia and the tech in­dus­try, at least in the West. Fi­nally, if there is in fact a se­cu­rity dilemma re­gard­ing AI weaponry, then ac­tivism to stop it is un­likely to be fruit­ful. The liter­a­ture on the effi­cacy of arms con­trol in in­ter­na­tional re­la­tions is rather mixed; it seems to work only as long as the weapons are not ac­tu­ally vi­tal for na­tional se­cu­rity.

Secrecy

AI de­vel­op­ment for mil­i­tary pur­poses would likely be car­ried out in se­cret, mak­ing it more difficult for states to co­or­di­nate and share in­for­ma­tion about its de­vel­op­ment. How­ever, that doesn’t pre­vent such co­or­di­na­tion and shar­ing from oc­cur­ring with non-mil­i­tary AI. More­over, Arm­strong et al (2016) showed that un­cer­tainty about AI ca­pa­bil­ities ac­tu­ally in­creases safety in an arms race to AGI.

Safety research

A par­tic­u­lar benefit of mil­i­taries in­vest­ing in AI re­search is that their sys­tems are fre­quently safety-crit­i­cal and tasked with tak­ing hu­man life, and there­fore they are built to higher-than-usual stan­dards of ver­ifi­ca­tion and safety. Build­ing and de­ploy­ing AI sys­tems in tense situ­a­tions where many ethics pan­els and in­ter­na­tional watch­dogs are voic­ing pub­lic fears is a great way to im­prove the safety of AI tech­nol­ogy. It could lead to col­lab­o­ra­tion, test­ing and lob­by­ing for safety and ethics stan­dards, that can be ap­plied to many types of AI sys­tems el­se­where in so­ciety.

Conclusion

LAW de­vel­op­ment could lead to an arms race, pos­si­bly with a lot of un­cer­tainty about other coun­tries’ ca­pa­bil­ities. This would be costly and would prob­a­bly in­crease the risk of mis­al­igned AGI. How­ever, it might lead to faster de­vel­op­ment of benefi­cial AI tech­nol­ogy, not only in terms of ba­sic com­pe­tence but also in terms of safety and re­li­a­bil­ity.

How will LAWs change do­mes­tic con­flict?

Crime and terrorism

Some fear that LAWs could be cheaply used to kill peo­ple with­out leav­ing much ev­i­dence be­hind. How­ever, au­tonomous tar­get­ing is clearly worse for an in­ten­tional homi­cide; it’s cheaper, eas­ier and more re­li­able if you pi­lot the drone your­self. A drone with a firearm was built and widely pub­li­cized in 2015 but, as far as I know, there have been no drone mur­ders. Au­ton­omy could be more use­ful in the fu­ture if drone crime be­comes such a threat as to prompt peo­ple to in­stall jam­ming de­vices in their homes, but then it will be very difficult for the drone to kill some­one in­side or near their house (which can also be guarded with sim­ple im­prove­ments like sen­sors and translu­cent win­dows).

Peo­ple similarly worry that as­sas­si­na­tions will be made eas­ier by drones. Au­ton­omy could already be use­ful here be­cause top poli­ti­cal lead­ers are some­times pro­tected with jam­ming de­vices. How­ever, when a poli­ti­cian is trav­el­ing in ve­hi­cles, speak­ing in­doors or be­hind bul­let-re­sis­tant glass, and pro­tected by se­cu­rity teams, it be­comes very difficult for a drone to harm them. Small sim­ple drones with tox­ins or ex­plo­sives can be blocked by se­cu­rity guards. One drone as­sas­si­na­tion at­tempt was made in 2018; it failed (though most as­sas­si­na­tion at­tempts do any­way). In the worst case, poli­ti­ci­ans can avoid pub­lic events en­tirely, only com­mu­ni­cat­ing from more se­cure lo­ca­tions to cam­eras and tighter au­di­ences; this would be un­for­tu­nate but not a ma­jor prob­lem for gov­ern­ment func­tion­ing.

The cheap­ness of drones makes them a po­ten­tial tool for mass kil­ling. Au­tonomous tar­get­ing would be more use­ful if some­one is us­ing large num­bers of drones. How­ever, it’s not clear how ter­ror­ists would get their hands on mil­i­tary tar­get­ing soft­ware – es­pe­cially be­cause it’s likely to be bound up with mil­i­tary-spe­cific elec­tronic hard­ware. We don’t see tools like the Army Bat­tle Com­mand Sys­tem be­ing used by Syr­ian in­sur­gents; we don’t see pirated use of for­eign mil­i­tary soft­ware in gen­eral. The same thing is likely to ap­ply to mil­i­tary AI soft­ware. And the idea that the mil­i­tary would di­rectly con­struct mass mur­der drones and then lose them to ter­ror­ists is non­sense. More­over, civilian tech­nol­ogy could eas­ily be re­pur­posed for the same ends. So, while drone ter­ror­ism is a real risk, ban­ning LAWs from the mil­i­tary is un­likely to do any­thing to pre­vent it. You might be able to slightly slow down some of the gen­eral progress in small au­tonomous robots, per­haps by ban­ning more tech­nol­ogy be­sides LAWs, but that would in­hibit the de­vel­op­ment of many benefi­cial tech­nolo­gies as well.

In ad­di­tion, there are quite a few coun­ter­mea­sures which could sub­stan­tially miti­gate the im­pact of drone ter­ror­ism: both sim­ple pas­sive mea­sures, and ac­tive defenses from equally so­phis­ti­cated au­tonomous sys­tems.

Mean­while, LAWs and their as­so­ci­ated tech­nolo­gies could be use­ful for stop­ping crime and ter­ror­ism.

Sup­press­ing rebellions

LAWs (or less-than-lethal AI weapons using similar technology) could be used for internal policing to stifle rebellions. Their new military capability likely won't change much, as rebels could obtain their own AI, and rebels have usually taken asymmetric approaches with political and tactical exploits to circumvent their dearth of heavy organized combat power. And, per Lyall and Wilson (2009), using mechanized forces against an insurgency provokes more backlash and increases the chance that the counterinsurgency will fail; by similar logic, LAWs probably wouldn't be productive in reinforcing most fragile states unless they were significantly more capable than humans.

The rele­vant differ­ence is that the AI could be made with blind obe­di­ence to the state or to the leader, so it could be com­manded to sup­press re­bel­lions in cases where hu­man po­lice­men and sol­diers would stand down. In a democ­racy like the US, the peo­ple might de­mand that LAWs be built with ap­pro­pri­ate safe­guards to re­spect the Con­sti­tu­tion and re­ject un­lawful or­ders no mat­ter what the Pres­i­dent de­mands, or per­haps with a rule against be­ing used do­mes­ti­cally at all. But in au­to­cratic coun­tries, the state might be able to de­ploy LAWs with­out such re­stric­tions.

In 2018, the Su­danese Revolu­tion and the Ar­me­nian Revolu­tion were both aided by mil­i­tary per­son­nel break­ing away from the gov­ern­ment. And in 2019, Bo­li­vian pres­i­dent Evo Mo­rales was made to step down by the po­lice and mil­i­tary. There­fore, this is a real is­sue.

LAWs might have another effect of making effective revolutions more common by forcing people to stick to peaceful methods which are, per Chenoweth (2011), more successful. But this is debatable because it assumes that protest and rebellion movements currently behave irrationally, choosing violence even where peaceful methods would serve them better. Also, peaceful rebellions in general might be less successful if they no longer pose a credible threat of violence.

LAWs might make rev­olu­tions eas­ier by re­duc­ing the size of the mil­i­tary. Mili­tary per­son­nel who are trained on weapons sys­tems like ar­tillery and war­ships can still be used in a sec­ondary ca­pac­ity as se­cu­rity when the use of their pow­er­ful weapons would be in­ap­pro­pri­ate. Au­tonomous weapons do not pos­sess this flex­i­bil­ity. You can­not take a C-RAM air defense sys­tem and tell it to defend the gates of the pres­i­den­tial palace against a crowd of demon­stra­tors, but you can do this with the gun crew of some man­ual sys­tem. The con­ver­gence of mil­i­taries to­wards more and more com­plex force-on-force ca­pa­bil­ities could lead to them be­com­ing less use­ful for main­tain­ing state sta­bil­ity.

Finally, as noted in a previous section, the use of LAWs to bolster state stability against the populace might provoke more alienation and backlash. If autocrats err towards over-using LAWs to protect their regimes, as many states have done with regular mechanized forces, they could inadvertently weaken themselves. Otherwise, smart states will seek to avoid being forced to deploy LAWs against their own people.

So there is only weak over­all rea­son to be­lieve that LAWs will make rev­olu­tions more difficult. Also, it’s not clear if mak­ing re­bel­lions harder is a bad thing. Many anti­gov­ern­ment move­ments just make things worse. For in­stance, cor­rupt Iraqi and Afghan se­cu­rity forces with Is­lamist sym­pa­thies have done ma­jor dam­age to their coun­tries dur­ing the re­cent con­flicts.

On the other hand, if re­bel­lion be­comes more difficult, then AI weapons could also change state be­hav­ior: the state might act in a more au­to­cratic man­ner since it feels more se­cure. I wouldn’t think that this is a ma­jor prob­lem in a liberal democ­racy, but there are differ­ing opinions on that – and not ev­ery coun­try is a liberal democ­racy, nor are they go­ing to change any­time soon (Mounk and Foa 2018). In an ex­treme case, drone power might lead to heavy so­cial strat­ifi­ca­tion and au­toc­racy.

But we should re­mem­ber that many other things may change in this equa­tion. So­cial me­dia has weak­ened state sta­bil­ity in re­cent years, plau­si­bly to the detri­ment of global well-be­ing. Fu­ture tech­nolo­gies may in­crease pop­u­lar power even fur­ther.

In sum­mary, there are weak rea­sons to be­lieve that LAWs will in­crease state sta­bil­ity, pacify poli­ti­cal in­sur­rec­tion­ists, and make state be­hav­ior more au­to­cratic. It’s a com­plex is­sue and it is not clear if in­creased state sta­bil­ity will be good or bad. How­ever, there is a note­wor­thy tail risk of se­ri­ous au­toc­racy.

Do­mes­tic democide

Democide is the intentional killing of unarmed people by government agents acting in their authoritative capacity and pursuant to government policy or high command. LAWs could make this issue worse by executing government orders without much question, in a context where humans might refuse to obey the order. On the flip side of the coin, LAWs could faithfully follow the law, and would not be vulnerable to the excesses of vengeful human personnel. LAWs would not have a desire to commit rape, which has been a major component of the suffering inflicted by democidal actors.

I don't have any special level of familiarity with the histories of the worst democides, such as the Great Leap Forward, the Great Purge, the Nazi Holocaust, and the brutalities of the Sino-Japanese War. But from what I've learned, the agents of evil were neither restrained by much government oversight nor likely to object much to their assignments, which suggests that replacing them with LAWs wouldn't change much. But there were many small acts of opportunistic brutality that weren't necessary for achieving government objectives. States like the Stalinist USSR wouldn't have built LAWs with special safeguards against committing terrible actions, but I do think that the LAWs wouldn't have as much drive for gratuitous brutality as was present among some of the human soldiers and police. A democidal state could program LAWs to want to cause gratuitous suffering to victims, but it would be more politically advantageous (at least on the international stage) to avoid doing such a thing. Conversely, if it wanted to perform gratuitous repression, it could always rely on human personnel to do it.

Smaller-scale, more com­plex in­stances of de­mo­cide have more of­ten been char­ac­ter­ized by groups act­ing wrongly on the ba­sis of their racial or poli­ti­cal prej­u­dices, or­thog­o­nal or some­times con­tra­dic­tory to the guidance of the state. This pat­tern oc­curs fre­quently in eth­nic clashes in places like the Mid­dle East and the Cau­ca­sus. And when char­ac­ter­iz­ing the risks which mo­ti­vate the ac­tions of an­tifa in the United States, Gillis (2017) writes “The dan­ger isn’t that the KKK per­suades a hun­dred mil­lion peo­ple to join it and then wins elec­tions and in­sti­tutes fas­cist rule. That’s a straw­man built on in­cred­ibly naive poli­ti­cal no­tions. The dan­ger is that the fas­cist fringe spreads ter­ror, pushes the over­ton win­dow to make hy­per-na­tion­al­ism and racism ac­cept­able in pub­lic, and grad­u­ally de­taches the ac­tual power of the state (the po­lice and their guns) from the more re­served liberal le­gal ap­para­tus sup­pos­edly con­strain­ing them.” In this sort of con­text, re­plac­ing hu­man polic­ing with ma­chines which are more di­rectly amenable to pub­lic over­sight would be a pos­i­tive de­vel­op­ment.

So over­all, I would ex­pect states with LAWs to be a bit less de­mo­ci­dal.

Conclusion

The impact of LAWs on rebellions and state stability is complex, and there is a significant risk that they will make things worse by increasing autocracy. On the other hand, there is also a significant chance that they will make things better by reducing state fragility. This question partly, but not entirely, hinges on whether it's a good thing for armed rebellion to become more difficult. I don't think either outcome is more likely than the other, but the possible risks seem more severe in magnitude than the possible benefits.

Replacing human soldiers with LAWs will probably reduce the risks of democide.

For the time be­ing, I would sup­port a rule where au­tonomous weapons must re­fuse to tar­get the coun­try’s own peo­ple no mat­ter what; this should be pretty fea­si­ble given that the mere idea of us­ing fa­cial recog­ni­tion in Amer­i­can polic­ing is very con­tro­ver­sial.

Indi­rect effects of cam­paign­ing against kil­ler robots

It's one thing to discuss the theory of what LAW policy should be, but in practice we should also look at the social realities of activism. Since people such as those working at FLI have taken a very public-facing approach in provoking fear of AI weapons (example), we should view all facets of the general public hostility to LAWs as potential consequences of such efforts.

Stymy­ing the de­vel­op­ment of non­lethal technology

There is quite some irony here. The same community of AI researchers and commentators that attempted to discourage frank discussion of AGI risks, condemning it as alarmism and appealing to the benefits of technological progress merely because of some unintended secondary reactions to the theory of AGI risk, has quickly condemned military AI development in a much more explicitly alarmist manner and made direct moves to restrict technological development. This fearmongering has created costs spilling over into other technologies which are more desirable and defensible than LAWs.

Pro­ject Maven pro­vided tar­get­ing data for coun­tert­er­ror­ist and coun­ter­in­sur­gent strikes which were man­aged by full hu­man over­sight, but Google quit the pro­ject af­ter mass em­ployee protest. Eleven em­ploy­ees ac­tu­ally re­signed in protest, an ac­tion which can gen­er­ally harm the com­pany and its var­i­ous pro­jects. Google also kil­led its tech­nol­ogy ethics board af­ter in­ter­nal out­rage at its com­po­si­tion, which was pri­mar­ily due to a gen­der poli­tics con­tro­versy but par­tially ex­ac­er­bated by the in­clu­sion of drone com­pany CEO Dyan Gibbens; the ethics board would not have had much di­rect power but could have still played an im­por­tant role in fos­ter­ing pos­i­tive con­ver­sa­tions and trust for the in­dus­try in a time of poli­ti­cal po­lariza­tion. 57 sci­en­tists called for a boy­cott of a South Korean uni­ver­sity be­cause it built a re­search cen­ter for mil­i­tary AI de­vel­op­ment, but the uni­ver­sity re­jected the no­tion that its work would in­volve LAWs.

Dispro­por­tionately wors­en­ing the strate­gic po­si­tion of the US and allies

The re­al­ity of this type of ac­tivism falls far short of the ideal where coun­tries across the world are dis­cour­aged from build­ing and adopt­ing LAWs. Rather, this ac­tivism is mostly con­cen­trated in Western liberal democ­ra­cies. The prob­lem is that these are pre­cisely the states for which in­creased mil­i­tary power is im­por­tant. Pro­tect­ing Eastern Europe and Ge­or­gia from Rus­sian mil­i­tary in­ter­ven­tions is (or would have been) a good goal. Sup­press­ing Is­lamist in­sur­gen­cies in the Mid­dle East is a good goal. Pro­tect­ing Taiwan and other coun­tries in the Pa­cific Rim from Chi­nese ex­pan­sion­ism is a good goal.

And the West does not have a preponderance of military power. We are capable of winning a war against Russia, but the potential costs are high enough to allow Russia to be quite provocative without major repercussion. More importantly, China might prevail today in a military conflict with the US over an issue like the status of Taiwan. The consequences could include destruction from the war itself, heightened tension among others in the region, loss of local freedoms and democracy (consider Hong Kong), deliberate repression of dissident identities (consider the Uyghurs and Falun Gong), reduced prosperity due to Chinese state ventures supplanting the private sector (Boeing et al 2015, Fang et al 2015) or whatever other economic concessions China foists upon its neighbors, and loss of strategic resources and geopolitical expansion which would give China more power to alter world affairs. Chinese citizens and elites are hawkish and xenophobic (Chen Weiss 2019, Zhang 2019). Conflict may be inevitable due to Thucydides' Trap or Chinese demands for global censorship. While some other Indo-Pacific countries are also guilty of bad domestic policies (like India and Indonesia), they are democracies which have more openness and prospects for cooperation and improvement over time. Even if US military spending is not worth the money, it's still the case that American strength is preferable (holding all else equal) to American weakness.

The ma­jor­ity of for­eign policy thinkers be­lieve (though not unan­i­mously) that it’s good for the world or­der if the US main­tains com­mit­ments to defend al­lies. The mixed at­ti­tudes of Sili­con Valley and academia to­wards mil­i­tary AI de­vel­op­ment in­terfere with this goal.

If you only make care­ful poli­ti­cal moves to­wards an in­ter­na­tional AI mora­to­rium then you aren’t per­pet­u­at­ing the prob­lem, but if you spread pop­u­lar fear and op­po­si­tion to AI weapons (es­pe­cially among coders and com­puter sci­ence academia) then it is a differ­ent story. In fact, you might even be wors­en­ing the prospects for in­ter­na­tional arms limi­ta­tions be­cause China and Rus­sia could be­come more con­fi­dent in their abil­ity to match or out­pace Western mil­i­tary AI de­vel­op­ment when they can see that our in­ter­nal frac­tures are limit­ing our tech­nolog­i­cal ca­pac­i­ties. In that case we would have lit­tle di­rect lev­er­age to get them to com­ply with an AI weapons ban.

Safety and ethics standards

If a ban is attempted in vain, the campaign may nevertheless lead to more rigorous standards of ethics and safety in military AI, as it sends a strong signal about the dangers of AI weapons. Additionally, banning LAWs could inspire greater motivations for safety and ethics in other AI applications.

How­ever, if we com­pare anti-robot cam­paign­ing against the more rea­son­able al­ter­na­tive – di­rectly ad­vo­cat­ing for good stan­dards of ethics and safety – then it will have less of an im­pact. Ad­di­tion­ally, the idea of a ban may dis­tract peo­ple from ac­cept­ing the com­ing tech­nol­ogy and do­ing the nec­es­sary le­gal and tech­ni­cal work to en­sure ethics and safety.

Conclusion

The cur­rent phe­nomenon of liberal Western­ers con­demn­ing LAWs is bad for the AI in­dus­try and bad for the in­ter­ests of Amer­ica and most for­eign coun­tries. It might in­di­rectly lead to more ethics and safety in mil­i­tary AI.

Ag­gre­ga­tion and conclusion

(Two summary tables appeared here in the original post: one listing all the individual considerations regarding AI use, and one listing the major categories.)

Here are my principal conclusions:

1. LAWs are more likely to be a good development than a bad one, though there is quite a bit of uncertainty and one could justify being neutral on the matter. It is not justified to expend effort against the development of lethal autonomous weapons, as the expected benefits of such efforts do not outweigh their costs.

2. If some­one still op­poses lethal au­tonomous weapons, they should fo­cus on di­rectly mo­ti­vat­ing steps to re­strict their de­vel­op­ment with an in­ter­na­tional treaty, rather than fo­ment­ing gen­eral hos­tility to LAWs in Western cul­ture.

3. The con­cerns over AI weapons should pivot away from ac­ci­dents and moral dilem­mas, to­wards the ques­tion of who would con­trol them in a do­mes­tic power strug­gle. This is­sue is both more im­por­tant and more ne­glected.

Ad­den­dum: is AI kil­ling in­trin­si­cally bet­ter or worse than man­ual kil­ling?

There is some philo­soph­i­cal de­bate on whether it’s in­trin­si­cally bad for some­one to be kil­led via robot rather than by a hu­man. One can sur­mise im­me­di­ately that ar­gu­ments against LAWs on this ba­sis are based on du­bi­ous ide­olo­gies which marginal­ize peo­ple’s ac­tual prefer­ences and well-be­ing in fa­vor of var­i­ous nonex­is­tent ab­stract prin­ci­ples. Such views are not com­pat­i­ble with hon­est con­se­quen­tial­ism which pri­ori­tizes peo­ple’s real feel­ings and de­sires. Law pro­fes­sor Ken An­der­son suc­cinctly trashed such ar­gu­ments by hy­poth­e­siz­ing a mil­i­tary offi­cial who says… “Listen, you didn’t have to be kil­led here had we fol­lowed IHL [in­ter­na­tional hu­man­i­tar­ian law] and used the au­tonomous weapon as be­ing the bet­ter one in terms of re­duc­ing bat­tlefield harm. You wouldn’t have died. But that would have offended your hu­man dig­nity… and that’s why you’re dead. Hope you like your hu­man dig­nity.”

Also, in contrast to a philosopher's abstract understanding of killing in war, soldiers do not kill after some kind of pure process of ethical deliberation which demonstrates that they are acting morally. Soldiers learn to fight as a mechanical procedure, with the motivation of protection and success on the battlefield, and their ethical standard is to follow orders as long as those orders are lawful. Infantry soldiers often don't target individual enemies; rather, they lay down suppressive fire upon enemy positions and use weapons with a large area of effect, such as machine-guns and grenades. They don't think about each kill in ethical terms; instead they mainly rely on their Rules Of Engagement, which are effectively an algorithm that determines when you can or can't use deadly force upon another human.

Furthermore, military operations involve the use of large systems where it is difficult to determine a single person who has the responsibility for a kinetic effect. In artillery bombardments for instance, an officer in the field will order his artillery observer to make a request for support or request it himself, based on an observation of enemy positions which may be informed by prior intelligence analysis done by others. The requested coordinates are checked by a fire direction center for avoidance of collateral damage and fratricide, and if approved then the angle for firing is relayed to the gun line. The gun crews carry out the request. Permissions and procedures for this process are laid out beforehand. At no point does one person sit down and carry out philosophical deliberation on whether the killing is moral; it is just a series of people doing their individual jobs, making sure that a bunch of things are being done correctly. The system as a whole looks just as grand and impersonal as LAWs do. This further undermines philosophical arguments against LAWs.