Nothing Wrong With AI Weapons

By Kyle Bogosian

With all the recent worries over AI risks, a lot of people have raised fears about lethal autonomous weapons (LAWs), which take the place of soldiers on the battlefield. Specifically, in the news recently, Elon Musk and over 100 experts requested that the UN implement a ban: https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war

However, we should not dedicate effort toward this goal. I don’t know whether anyone in the Effective Altruism community has done so, but I have seen many people talk about it, and I have seen FLI dedicate nontrivial effort toward aggregating and publishing views against the use of LAWs. I don’t think we should be engaged in any of these activities to try to stop the implementation of LAWs, so first I will answer worries about the dangers of LAWs, and then I will point out a benefit.

The first class of worries is that it is morally wrong to kill someone with an LAW—specifically, that it is more morally wrong than killing someone in a different way. These nonconsequentialist arguments hold that the badness of death has something to do with factors other than the actual suffering and deprivation caused to the victim, the victim’s family, or society at large. There is a lot of philosophical literature on this issue, generally relating to the idea that machines don’t have the same agency, moral responsibility, or moral judgement that humans do, or something of the sort. I’m going to mostly assume that people here aren’t persuaded by these philosophical arguments in the first place, because this is a lazy forum post, it would take a lot of time to read and answer all the arguments on this topic, and most people here are consequentialists.

I will say one thing though, which hasn’t been emphasized before and undercuts many of the arguments alleging that death by AI is intrinsically immoral: in contrast to the typical philosopher’s abstract understanding of killing in war, soldiers do not kill after some kind of pure process of ethical deliberation which demonstrates that they are acting morally. Soldiers learn to fight as a mechanical procedure, with the motivation of protection and success on the battlefield, and their ethical standard is to follow orders as long as those orders are lawful. Infantry soldiers often don’t target individual enemies; rather, they lay down suppressive fire upon enemy positions and use weapons with a large area of effect, such as machine guns and grenades. They don’t think about each kill in ethical terms; they just memorize their Rules of Engagement, an algorithm that determines when you can or can’t use deadly force against another human.

Furthermore, military operations involve large systems in which it is difficult to identify a single person who is responsible for a kinetic effect. In an artillery bombardment, for instance, an officer in the field will order his artillery observer to make a request for support, or request it himself, based on an observation of enemy positions which may be informed by prior intelligence analysis done by others. The requested coordinates are checked by a fire direction center for avoidance of collateral damage and fratricide, and if approved, the angle for firing is relayed to the gun line. The gun crews carry out the request. Permissions and procedures for this process are laid out beforehand. At no point does one person sit down and carry out philosophical deliberation on whether the killing is moral; it is just a series of people doing their individual jobs, making sure that a bunch of things are being done correctly. The system as a whole looks just as grand and impersonal as automatic weaponry does. (I speak from experience, having served in a field artillery unit.)

When someone in the military screws up and gets innocents killed, the blame often falls upon the commander who had improper procedures in place, not some individual who lost his moral compass. This implies that there is no problem with the attribution of responsibility for an LAW screwing up: it will likewise go to the engineer or programmer who had improper procedures in place. So if killing by AI is immoral because of the lack of individual moral responsibility or the lack of moral deliberation, then killing by soldiers is not really any better, and we shouldn’t care about replacing one with the other.

So, on we go to the consequentialist concerns about the harms of LAWs.

First, there is the worry that LAWs will make war more frequent, since nations won’t have to worry about losing soldiers, and that this will increase civilian deaths. This worry is attributed to unnamed experts in the Guardian article linked above. The logic here is a little bit gross, since it says that we should make sure ordinary soldiers like me die for the sake of the greater good of manipulating the political system, and it also implies that things like body armor and medics should be banned from the battlefield. But I won’t dwell on that here, because this is a forum full of consequentialists and I honestly think that consequentialist arguments are valid anyway.

But the argument assumes that losing machines costs governments less than losing soldiers. If nations are indifferent between fielding soldiers and fielding equally competent machines, then losing the machines costs them just as much as losing the soldiers, and there will be no difference in the expected utility of warfare. If machine armies are better than human soldiers but also more expensive overall, and nations care only about security and economic costs, then it seems that nations will go to war less frequently, in order to preserve their expensive, better-than-human machines. However, you might believe (with good reason) that nations respond disproportionately to the loss of life on the battlefield, will go to great lengths to avoid it, and will end up with a system that enables them to go to war at a lower overall cost.

Well, in undergrad I wrote a paper on the expected utility of war (https://docs.google.com/document/d/1eGzG4la4a96ueQl-uJD03voXVhsXLrbUw0UDDWbSzJA/edit?usp=sharing). Assuming Eckhardt’s (1989) figure of a 50% civilian casualty ratio (https://en.wikipedia.org/wiki/Civilian_casualty_ratio), I found that the proliferation of robots on the battlefield would only increase total casualties if nations considered the difference between losing human armies in wartime and losing comparable machines to be more than 1/3 of the total costs of war. Otherwise, robots on the battlefield would decrease total casualties. It seems to me like it could go either way, with robot weapons having a more positive impact in wars of national security and a more negative impact in wars of foreign intervention and peacekeeping. While I can’t demonstrate that robotic weapons will reduce the total amount of death and destruction caused by war, there is no clear case that they would increase total casualties, which is what you would need in order to provide a reason for us to work against them.
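
To make the structure of that calculation concrete, here is a minimal sketch of the kind of model involved. It is a simplified reconstruction, not the model from the paper: the parameter names are illustrative, war frequency is assumed to scale inversely with a nation’s perceived cost of war, and under that particular assumption the break-even fraction comes out at 1/2 rather than 1/3, since the exact threshold depends on how frequency is assumed to respond to cost.

```python
# Toy model (an illustrative reconstruction, not the linked paper's exact model):
#   c: civilian casualty ratio, the fraction of war deaths that are civilians
#      (0.5 per Eckhardt 1989).
#   f: fraction of a nation's total perceived cost of war made up by the
#      difference between losing human soldiers and losing comparable machines.
# Assumptions: war frequency scales inversely with perceived cost, so replacing
# soldiers with machines multiplies frequency by 1 / (1 - f), while per-war
# casualties fall from (soldiers + civilians) to civilians only.

def casualty_multiplier(c: float, f: float) -> float:
    """Expected total casualties with robot armies, as a multiple of the
    expected total casualties with human armies, under the toy assumptions."""
    per_war_with_humans = 1.0                # normalize per-war casualties to 1
    per_war_with_robots = c                  # only the civilian share of deaths remains
    frequency_multiplier = 1.0 / (1.0 - f)   # cheaper wars happen more often
    return per_war_with_robots * frequency_multiplier / per_war_with_humans

if __name__ == "__main__":
    for f in (0.2, 1 / 3, 0.5, 0.6):
        m = casualty_multiplier(c=0.5, f=f)
        print(f"f = {f:.2f}: robot armies multiply expected total casualties by {m:.2f}")
```

The point of the sketch is just the tradeoff: robots remove the soldier share of per-war deaths, while cheaper wars may become more frequent, and which effect dominates depends on how heavily nations weigh the human-versus-machine difference.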

There is also a flaw in the logic of this argument: it applies equally well to some other methods of waging war. In particular, having a human remotely control a military vehicle would have the same effect here as having a fully autonomous military vehicle. So if LAWs were banned, but robot technology turned out to be pretty good and governments wanted to protect soldiers’ lives, we would end up with a similar result.

Second, there is the worry that autonomous weapons will make tense military situations between non-belligerent nations less stable and more escalatory, prompting new outbreaks of war. I don’t know what reason there is to expect a loss of stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then the machines will probably be at least as good at avoiding errors. They do have faster response times: cutting humans out of the loop makes actions happen faster, enabling a quicker outbreak of violence and escalation of tactical situations. However, the flip side is that not having humans present in these situations means that outbreaks of violence will carry less political sting and therefore have a better chance of ending in a peaceful solution. A country can always be compensated for lost machinery through diplomatic negotiations and financial concessions; the same cannot be said for lost soldiers.

Third, you might say that LAWs will prompt an arms race in AI, reducing safety. But faster AI development will help us avoid other kinds of risks unrelated to AI, and it will expedite humanity’s progress and expansion towards a future with exponentially growing value. Moreover, there is already substantial AI development in civilian sectors as well as in non-battlefield military uses, and all of these have competitive dynamics. AGI would have such broad applications that restricting its use in one or two domains is unlikely to make a large difference: economic power is the source of all military power, international public opinion has nontrivial importance in international relations, and AI can help nations beat their competitors at both.

Moreover, no military is currently at the cutting edge of AI or machine learning (as far as we can tell). The top research is done in academia and the tech industry; militaries all over the world are just trying to adapt existing techniques for their own use, and don’t have the best talent to do so. Finally, if there is in fact a security dilemma regarding AI weaponry, then activism to stop it is unlikely to be fruitful. The literature on the utility of arms control in international relations is mixed, to say the least; arms control seems to work only as long as the weapons are not actually vital for national security.

Finally, one could argue that the existence of LAWs makes it possible for hackers, or an unfriendly advanced AI agent, to take control of them and use them for bad ends. However, in the long run a very advanced AI system would have many tools at its disposal for capturing global resources, such as social manipulation, hacking, nanotechnology, biotechnology, building its own robots, and things beyond current human knowledge. A superintelligent agent would probably not be limited by human precautions; making the world as a whole less vulnerable to ASI is not a commonly suggested strategy for AI safety, since the usual assumption is that once such an agent gets onto the internet, there is not much that can be done to stop it. Plus, it’s foolish to assume that an AI system with battlefield capabilities, which is just as good at general reasoning as the humans it replaced, would be vulnerable to a simple hack or takeover in a way that humans aren’t. If a machine can perform complex computations and inference regarding military rules, its duties on the battlefield, and the actions it can take, then it’s likely to have the same intrinsic resistance and skepticism towards strange and apparently unlawful orders that human soldiers do. Our mental model of the LAWs of the far future should not be something like a calculator with easy-to-access buttons, or a computer with a predictable response to adversarial inputs.

And in the near term, more autonomy would not necessarily make things any less secure than they are with the many other technologies we currently rely on. A fighter jet has electronics, as does a power plant. Lots of things can theoretically be hacked, and hacking an LAW to cause some damage isn’t necessarily any worse than hacking infrastructure or a manned vehicle. Just replace the GPS coordinates in a JDAM bomb package and you’ve already figured out how to use our existing equipment to deliberately cause many civilian casualties. Things like this don’t happen often, however, because military equipment is generally well hardened and difficult to access in comparison to civilian equipment.

And this brings me to a counterpoint in favor of LAWs. Military equipment is generally more robust than civilian equipment, and putting AI systems in tense situations where many ethics panels and international watchdogs are present is a great place to test their safety and reliability. Nowhere will the requirements of safety, reliability, and ethics be more stringent than in machines whose job it is to take human life. The more development and testing militaries conduct in this regard, the more room there is for collaboration, testing, and lobbying for safety and beneficial ethical standards that can be applied to many types of AI systems elsewhere in society. We should be involved in this latter process, not in a foolhardy dream of banning valuable weaponry.

edit: I forgot that disclosures are popular around here. I just started to work on a computer science research proposal for the Army Research Office. But that doesn’t affect my opinions here, which have been the same for a while.