Lethal autonomous weapons systems are an early test for AGI safety, arms race avoidance, value alignment, and governance
OK, so this makes sense and in my writeup I argued a similar thing from the point of view of software development. But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don’t want to lay the groundwork for a ban on AGI, we want to lay the groundwork for safe, responsible development. What you actually suggest, contra some other advocates, is to prohibit certain classes but not others… I’m not sure if that would be helpful or harmful in this dimension. Of course it certainly would be helpful if we simply worked to ensure higher standards of safety and reliability.
I’m skeptical that this is a large concern. Have we learned much from the Ottawa Treaty (which technically prohibits a certain class of AWS) that will help us with AGI coordination? I don’t know. Maybe.
Seeking to govern deeply unpopular AWSs (which also presently lack strong interest groups pushing for them) provides the easiest possible opportunity for a “win” in coordination amongst military powers.
I don’t think this is true at all. Defense companies could support AWS development, and the overriding need for national security could be a formidable force that manifests in domestic politics in a variety of ways. Surely it would be easier to achieve wins on coordinating issues like civilian AI, supercomputing, internet connectivity, or many other tech governance issues which affect military (and other) powers?
Compared to other areas of military coordination among military powers, I guess AI weapons look like a relatively easy area right now but that will change in proportion to their battlefield utility.
While these concerns are not foremost from the perspective of overall expected utility, for these and other reasons we believe that delegating the decision to take a human life to machine systems is a deep moral error, and doing so in the military sets a terrible precedent.
I thought your argument here was just that we need to figure out how to implement autonomous systems in ways that best respond to these moral dilemmas, not that we need to avoid them altogether. AGI/ASI will almost certainly be making such decisions eventually, right? We better figure it out.
In my other post I had detailed responses to these issues, so let me just say briefly here that the mere presence of a dilemma in how to design and implement an AWS doesn’t count as a reason against doing it at all. Different practitioners will select different answers to the moral questions that you raise, and the burden of argument is on you to show that we should expect practitioners to pick wrong answers that will make AWSs less ethical than the alternatives.
Lethal autonomous weapons as WMDs
At this point, it’s been three years since FLI released their slaughterbots video, and despite all the talk of how it is cheap and feasible with currently available or almost-available technology, I don’t think anyone is publicly developing such drones—suggesting it’s really not so easy or useful.
A mass drone swarm terror attack would be limited by a few things. First, distances. Small drones don’t have much range. So if these are released from one or a few shipping containers, the vulnerable area will be limited. These $100 micro drones have a range of only around 100 meters. The longest-range consumer drones apparently go 1-8km but cost several hundred or several thousand dollars. Of course you could do better if you optimize for range, but these slaughterbots cannot be optimized for range; they must also have many other features like a military payload, autonomous computing, and so on.
Covering these distances will take time. I don’t know how fast these small drones are supposed to go—is 20km/h a good guess, taking into account buildings posing obstacles to them? If so then it will take half an hour to cover a 10 kilometer radius. If these drones are going to start attacking immediately, they will make a lot of noise (from those explosive charges going off) which will alert people, and pretty soon alarm will spread on phones and social media. If they are going to loiter until the drones are dispersed, then people will see the density of drones and still be alerted. Specialized sensors or crowdsourced data might also be used to automatically detect unusual upticks in drone density and send an alert.
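As a sanity check on that timing guess, a minimal sketch (the 20 km/h speed and the 10 km radius are the assumed figures from the paragraph above, not measured data):

```python
# Rough transit-time check for a drone swarm from a single dispersal point.
# Both inputs are assumptions from the discussion, not measured figures.
speed_kmh = 20.0   # guessed effective speed through an urban environment
radius_km = 10.0   # guessed radius the swarm must cover

transit_minutes = radius_km / speed_kmh * 60
print(f"Time to reach the edge of a {radius_km:.0f} km radius: "
      f"{transit_minutes:.0f} minutes")  # -> 30 minutes
```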
So if the adversary has a single dispersal point (like a shipping container) then the amount of area he can cover is fundamentally pretty limited. If he tries to use multiple dispersal points to increase area and/or shorten transit time, then logistics and timing get complicated. (Timing and proper dispersal will be especially difficult if a defensive EW threat prevents the drones from listening to operators or each other.) Either way, the attack must be in a dense urban area to maximize casualties. But few people are actually outside at any given time. Most are in a building, in a car, or on public transport, even during rush hour or lunch break. And for every person who gets killed by these drones, there will be many others watching safely through car or building windows who can see what is going on and alert other people. So people’s vulnerability will be pretty limited. If the adversary decides to bring large drones to demolish barriers then it will be a much more expensive and complex operation. Plus, people only have to wait a little while until the drones run out of energy. The event will be over in minutes, probably.
If we imagine that drone swarms are a sufficiently large threat that people prepare ahead of time, then it gets still harder to inflict casualties. Sidewalks could have light coverings (also good for shade and insulation), people could carry helmets, umbrellas, or cricket bats, but most of all people would just spend more time indoors. It’s not realistic to expect this in an ordinary peacetime scenario but people will be quite adept at doing this during military bombardment.
Also, there are options for hard countermeasures which require no technology more complicated than what these slaughterbots themselves would entail. Fixtures in crowded areas could shoot anti-drone munitions (which could be less lethal against humans) or launch defensive drones to disable the attackers.
Now, obviously this could all change as drones get better. But defensive measures including defensive drones could improve at the same time.
I should also note that the idea of delivering a cheap deadly payload like toxins or a dirty bomb via shipping container has been around for a while, yet no one has carried it out.
The unfortunate flip-side of these differences, however, is that anti-personnel lethal AWSs are much more likely to be used. In terms of “bad actors,” along with the advantages of being safe to transport and hard to detect, the ability to selectively attack particular types of people who have been identified as worthy of killing will help assuage the moral qualms that might otherwise discourage mass killing.
I don’t think the history of armed conflict supports the view that people become much more willing to go to war when their weapons become more precise. After all the primary considerations in going to war are matters of national interest, not morality. If there is such a moral hazard effect then it is small and outweighed by the first-order reduction in harm.
Autonomous WMDs would pose all of the same sorts of threats that other ones do,[12]
Just because drones can deploy WMDs doesn’t mean they are anything special—you could also combine chem/bio/nuke weapons with tactical ballistic missiles, with hypersonics, with torpedoes, with bombers, etc.
Lethal autonomous weapons as destabilizing elements in and out of war
I stand by the point in my previous post that it is a mistake to conflate a lower threshold for conflict with a higher (severity-weighted) expectation of conflict, and military incidents will be less likely to escalate (ceteris paribus) if fewer humans are in the initial losses.
Someone (maybe me) should take a hard look at these recent arguments you cite claiming increases in escalation risk. The track record for speculation on the impacts of new military tech is not good so it needs careful vetting.
A large-scale nuclear war is unbelievably costly: it would most likely kill 1-7Bn people in the first year and wipe out a large fraction of Earth’s economic activity (i.e. on the order of one quadrillion USD or more, a decade’s worth of world GDP). Some current estimates of the likelihood of global-power nuclear war over the next few decades range from ~0.5-20%. So just a 10% increase in this probability, due to an increase in the probability of conflict that leads to nuclear war, costs in expectation ~500k-150M lives and ~$0.1-10Tn (not counting huge downstream life-loss and economic losses).
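A minimal sketch of that expected-cost arithmetic, using the speculative ranges quoted above (all inputs are the estimates from the text, not established figures):

```python
# Expected lives lost from a 10% relative increase in the probability of
# large-scale nuclear war, using the speculative ranges from the discussion.
deaths_range = (1e9, 7e9)     # first-year deaths (1-7Bn)
prob_range = (0.005, 0.20)    # baseline probability over coming decades (0.5-20%)
relative_increase = 0.10      # the assumed 10% relative bump in that probability

expected_low = deaths_range[0] * prob_range[0] * relative_increase
expected_high = deaths_range[1] * prob_range[1] * relative_increase
print(f"Expected cost: {expected_low / 1e6:.1f}M to {expected_high / 1e6:.0f}M lives")
```

Multiplying low by low and high by high gives roughly 0.5M to 140M lives, consistent with the ~500k-150M figure in the text.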
The mean expectations are closer to the lower ends of these ranges.
Currently, 87,000 people die in state-based conflicts per year. If automation cuts this by 25% then in three decades it will add up to 650k lives saved. That’s still outweighed if the change in probability is 10%, but for reasons described previously I think 10% is too pessimistic.
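The 650k figure is straightforward arithmetic on the stated inputs (the 25% reduction is the assumed figure from the paragraph above):

```python
# Lives saved over three decades if automation cuts state-based conflict deaths.
annual_deaths = 87_000   # current state-based conflict deaths per year
reduction = 0.25         # assumed fractional reduction from automation
years = 30               # three decades

lives_saved = annual_deaths * reduction * years
print(f"Lives saved over {years} years: {lives_saved:,.0f}")  # roughly 650k
```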
The third is simply that this is “somebody else’s problem,” and low-impact relative to other issues to which effort and resources could be devoted.[21] We’ve argued above against all three positions: the expected utility of widespread autonomous weapons is likely to be highly negative (due to an increased probability of large-scale war, if nothing else); the issue is addressable (with multiple examples of past successful arms-control agreements) and currently tractable if difficult; and success would also improve the probability of positive results in even more high-stakes arenas, including global AGI governance.
As the absolute minimum to address #3, I think advocacy on AWSs should be compared to advocacy on other new military tech like hypersonics and AI-enabled cyber weapons which come with their own fair share of similar worries.
We leave out disingenuous arguments against straw men such as “But if we give up lethal autonomous weapons and allow others to develop them, we lose the war.” No one serious, to our knowledge, is advocating this – the whole point of multilateral arms control agreements is that all parties are subject to them.
If you stigmatize them in the Anglosphere popular imagination as a precursor to a multilateral agreement, then that’s basically what you’re doing.
I would like to again mention the Ottawa Treaty. I don’t know much about it, but it seems like a rich subject to explore for lessons that can be applied to AWS regulation.
Thanks for your replies here, and for your earlier longer posts that were helpful in understanding the skeptical side of the argument, even if I only saw them after writing my piece. As replies to some of your points above:
But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don’t want to lay the groundwork for a ban on AGI, we want to lay the groundwork for safe, responsible development
It is unclear to me what you suggest we would be “sacrificing” if militaries did not have the legal opportunity to use lethal AWS. The opportunity I see is to make decisions, in a globally coordinated way and amongst potentially adversarial powers, about acceptable and unacceptable delegations of human decisions to machines, and enforcing those decisions. I can’t see how success in doing so would sacrifice the opportunity. Moreover, a ban on all autonomous weapons (including purely defensive nonlethal ones) is very unlikely and not really what anyone is calling for, so there will be plenty of opportunity to “practice” on non-lethal AWs, defenses against AWs, etc., on the technical front; there will also be other opportunities to “practice” on what life-and-death decisions should and should not be delegated, for example in judicial review.
Have we learned much from the Ottawa Treaty (which technically prohibits a certain class of AWS) that will help us with AGI coordination? I don’t know. Maybe
Though I understand why you have drawn a connection to the Ottawa Treaty because of its treatment of landmines, I believe this is the wrong analogy for AWSs. I believe the Biological Weapons Convention is more apt, and I think the answer would be “yes”: we have learned something about international governance and coordination for dangerous technology from the BWC. I also believe that the agreement not to use landmines is a global good.
Surely it would be easier to achieve wins on coordinating issues like civilian AI, supercomputing, internet connectivity, or many other tech governance issues which affect military (and other) powers?
I am not sure why you are confident it would be easier to reach binding agreements on these suggested matters. To the extent that it is possible, it may suggest that there is little value to be gained. What these suggestions generally lack is popular or political will: there is little appetite for an international agreement on e.g. internet connectivity. It’s not as high-stakes or consequential as lethal AWSs, and to first approximation, nobody cares. The point is to show agreement can be reached in an arena that is consequential for militaries, and this is our best opportunity to do so.
Different practitioners will select different answers to the moral questions that you raise, and the burden of argument is on you to show that we should expect practitioners to pick wrong answers that will make AWSs less ethical than the alternatives.
There are a lot of important and difficult moral questions worth a long discussion, as well as more practical questions of whether systems and chains-of-command are in fact created in a way that responsibility rests somewhere rather than nowhere. I’ve got my own beliefs on those, which may or may not be shared, but I actually don’t think we need to address them to judge the importance of limitations on autonomous weapons. I don’t necessarily agree that the burden is on me, though: it is certainly both legally (and I believe ethically) “your” responsibility, if you are creating a new system for killing people, to show that it is consistent with international law, for example.
At this point, it’s been three years since FLI released their slaughterbots video, and despite all the talk of how it is cheap and feasible with currently available or almost-available technology, I don’t think anyone is publicly developing such drones—suggesting it’s really not so easy or useful
At the time of release, Slaughterbots was meant to be speculative and to raise awareness of the prospect of risk. AGI and a full scale nuclear war haven’t happened either—that doesn’t make the risk not real. Would you lodge the same complaint against “The Day After”? Regardless, as to whether people are developing such drones, I suggest you review information in a report called “Slippery Slope” by PAX on such systems, especially about the Kargu drones from Turkey. I think you will decide that it is relatively “easy” and “useful” to develop lethal AWSs.
Responding to paragraphs starting with “A mass drone swarm terror attack…” through the paragraph starting with “Now, obviously this could…”: Your analysis here is highly speculative and presupposes a particular pattern in the development of offensive and defensive capabilities of lethal AWSs. I welcome any evidence you have on these points, but your scenario seems to a) assume limited offensive capability development, b) assume willingness and ability to implement layers of defensive measures at all “soft” targets, c) focus only on drones, not the many other possible lethal AWSs, and d) still produce a considerable amount of cost—both in countermeasures and in psychological costs—which would seem to suggest a steep price to be paid to have lethal AWSs even in a rosy scenario.
Finally, an order of hundreds of thousands of drones, designed as fully autonomous killing machines, is quite industrially significant. It’s just not something that a nonstate actor can pull off. And the idea that the military would directly construct mass murder drones and then lose them to terrorists is not realistic.
I believe we agree that in terms of serious (like 1000+ casualties) WMDs, the far greater risk is smaller state actors producing or buying them, not a rogue terror organization. As a reminder, it won’t (just) be the military making these weapons, but weapons makers who can then sell them (e.g., look at the export of drones by China and Turkey throughout many high-conflict regions). Further, once produced or sold to a state actor, weapons can and do then come into the possession of rogue actors, including WMDs. Look no further than the history of the Nunn-Lugar Cooperative Threat Reduction program for real incidents and close calls, the transfer of weapons from Syria to Hezbollah, etc.
I don’t think the history of armed conflict supports the view that people become much more willing to go to war when their weapons become more precise.
It may or may not be the case; as you indicate, it’s mixed in with a lot of factors. But precision (and lack of infrastructure destruction) are actually not the only, or even the primary, reasons I expect AWs will lead to wider conflict, depending on the context. In addition to potentially being more precise, lethal AWSs will be less attributable to their source and present less risk to use (in both physical and financial costs). At least in terms of violence (if not, to date, war), the latter seems to make a large difference, as exhibited by the US (manned) drone program, for example.
The mean expectations are closer to the lower ends of these ranges.
I’m not sure how to interpret this. The lower ends of the ranges are the lower ends of the ranges given by various estimators. The mean of each range is somewhere in the middle, depending on how you weight them.
The question of whether small-scale conflicts will increase enough to counterbalance the life-saving of substituting AWs for soldiers is, I agree, hard to predict. But unless you take the optimistic end of the spectrum (as I guess you have), I don’t see how the numbers can balance at all when including large-scale wars.
Someone (maybe me) should take a hard look at these recent arguments you cite claiming increases in escalation risk. The track record for speculation on the impacts of new military tech is not good so it needs careful vetting.
I welcome your investigation. I agree that speculation on the impacts of new military tech has not been great (along all spectrums), which is why precaution is a wise course of action.
As the absolute minimum to address #3, I think advocacy on AWSs should be compared to advocacy on other new military tech like hypersonics and AI-enabled cyber weapons which come with their own fair share of similar worries.
I agree that other emerging technologies (including some you don’t mention, like synthetic bioweapons), deserve greater attention. But that doesn’t mean lethal AWSs should be ignored.
If you stigmatize them in the Anglosphere popular imagination as a precursor to a multilateral agreement, then that’s basically what you’re doing.
This is a very strange argument to me. Saying something is problematic, and being willing in principle not to do it, seems like a pretty necessary precursor to making an agreement with others not to do it. Moreover, if something is ethically wrong, we should be willing to not do it even if others do it — but far, far better to enter into an agreement so that they don’t.
I welcome any evidence you have on these points, but your scenario seems to a) assume limited offensive capability development, b) assume willingness and ability to implement layers of defensive measures at all “soft” targets, c) focus only on drones, not the many other possible lethal AWSs, and d) still produce a considerable amount of cost—both in countermeasures and in psychological costs—which would seem to suggest a steep price to be paid to have lethal AWSs even in a rosy scenario.
I’m saying there are substantial constraints on using cheap drones to attack civilians en masse, some of them are more-or-less-costly preparation measures and some of them are not. Even without defensive preparation, I just don’t see these things as being so destructive.
If we imagine offensive capability development then we should also imagine defensive capability development.
What other AWSs are we talking about if not drones?
In addition to potentially being more precise, lethal AWSs will be less attributable to their source, and present less risk to use (both in physical and financial costs).
Hmm. Have there been any unclaimed drone attacks so far, and would that change with autonomy? Moreover, if such ambiguity does arise, would that not also mitigate the risk of immediate retaliation and escalation? My sense is that there are conflicting lines of reasoning here. How can AWSs increase the risks of dangerous escalation, but also be perceived as safe and risk-free by users?
I’m not sure how to interpret this. The lower ends of the ranges are the lower ends of the ranges given by various estimators. The mean of each range is somewhere in the middle, depending on how you weight them.
I mean, we’re uncertain about the 1-7Bn figure and uncertain about the 0.5-20% figure. When you multiply them together, the low x low is implausibly low and the high x high is implausibly high. But the mean x mean would be closer to the lower end. So if the means are 4Bn and 10% then the product is 40M, which is closer to the lower end of your 0.5-150M range. Yes, I realize this makes little difference (assuming your 1-7Bn and 0.5-20% estimates are normal distributions). It does seem apparent to me now that the escalation-to-nuclear-warfare risk is much more important than some of these direct impacts.
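To make the mean-times-mean arithmetic concrete (the 4Bn and 10% means are illustrative picks from the ranges, as above):

```python
# Product of means vs. ends of the product range: mean x mean lands near
# the low end because the upper bound is dominated by high x high.
deaths_mean = 4e9          # assumed mean of the 1-7Bn death estimate
prob_mean = 0.10           # assumed mean of the 0.5-20% probability estimate
relative_increase = 0.10   # the 10% relative probability bump

product_of_means = deaths_mean * prob_mean * relative_increase
print(f"Mean x mean expectation: {product_of_means / 1e6:.0f}M lives")  # -> 40M
```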
The question of whether small-scale conflicts will increase enough to counterbalance the life-saving of substituting AWs for soldiers is, I agree, hard to predict. But unless you take the optimistic end of the spectrum (as I guess you have), I don’t see how the numbers can balance at all when including large-scale wars.
I think they’d probably save lives in a large-scale war for the same reasons. You say that they wouldn’t save lives in a total nuclear war; that makes sense if civilians are attacked just as severely as soldiers. But large-scale wars may not be like this. Even nuclear wars may not involve major attacks on cities (though yes, I realize that the EV is greater for those that do).
This is a very strange argument to me. Saying something is problematic, and being willing in principle not to do it, seems like a pretty necessary precursor to making an agreement with others not to do it.
I suppose that’s fine, I was thinking more about concretely telling people not to do it, before any such agreement.
You also have to be in principle willing to do something if you want to credibly threaten the other party and convince them not to do it.
Moreover, if something is ethically wrong, we should be willing to not do it even if others do it
Well there are some cases where a problematic weapon is so problematic that we should unilaterally forsake it even if we can’t get an agreement. But there are also some cases where it’s just problematic enough that a treaty would be a good thing, but unilaterally forsaking it would do net harm by degrading our relative military position. (Of course this depends on who the audience is, but this discourse over AWSs seems to primarily take place in the US and some other liberal democracies.)
OK, so this makes sense and in my writeup I argued a similar thing from the point of view of software development. But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don’t want to lay the groundwork for a ban on AGI, we want to lay the groundwork for safe, responsible development. What you actually suggest, contra some other advocates, is to prohibit certain classes but not others… I’m not sure if that would be helpful or harmful in this dimension. Of course it certainly would be helpful if we simply worked to ensure higher standards of safety and reliability.
I’m skeptical that this is a large concern. Have we learned much from the Ottawa Treaty (which technically prohibits a certain class of AWS) that will help us with AGI coordination? I don’t know. Maybe.
I don’t think this is true at all. Defense companies could support AWS development, and the overriding need for national security could be a formidable force that manifests in domestic politics in a variety of ways. Surely it would easier to achieve wins on coordinating issues like civilian AI, supercomputing, internet connectivity, or many other tech governance issues which affect military (and other) powers?
Compared to other areas of military coordination among military powers, I guess AI weapons look like a relatively easy area right now but that will change in proportion to their battlefield utility.
I thought your argument here was just that we need to figure out how to implement autonomous systems in ways that best respond to these moral dilemmas, not that we need to avoid them altogether. AGI/ASI will almost certainly be making such decisions eventually, right? We better figure it out.
In my other post I had detailed responses to these issues, so let me just say briefly here that the mere presence of a dilemma in how to design and implement an AWS doesn’t count as a reason against doing it at all. Different practitioners will select different answers to the moral questions that you raise, and the burden of argument is on you to show that we should expect practitioners to pick wrong answers that will make AWSs less ethical than the alternatives.
At this point, it’s been three years since FLI released their slaughterbots video, and despite all the talk of how it is cheap and feasible with currently available or almost-available technology, I don’t think anyone is publicly developing such drones—suggesting it’s really not so easy or useful.
A mass drone swarm terror attack would be limited by a few things. First, distances. Small drones don’t have much range. So if these are released from one or a few shipping containers, the vulnerable area will be limited. These $100 micro drones have a range of only around 100 meters. The longest range consumer drones apparently go 1-8km but cost several hundred or several thousand dollars. Of course you could do better if you optimize for range, but these slaughterbots cannot be optimized for range, they must have many other features like military payload, autonomous computing, and so on.
Covering these distances will take time. I don’t know how fast these small drones are supposed to go—is 20km/h a good guess, taking into account buildings posing obstacles to them? If so then it will take half an hour to cover a 10 kilometer radius. If these drones are going to start attacking immediately, they will make a lot of noise (from those explosive charges going off) which will alert people, and pretty soon alarm will spread on phones and social media. If they are going to loiter until the drones are dispersed, then people will see the density of drones and still be alerted. Specialized sensors or crowdsourced data might also be used to automatically detect unusual upticks in drone density and send an alert.
So if the adversary has a single dispersal point (like a shipping container) then the amount of area he can cover is fundamentally pretty limited. If he tries to use multiple dispersal points to increase area and/or shorten transit time, then logistics and timing get complicated. (Timing and proper dispersal will be especially difficult if a defensive EW threat prevents the drones from listening to operators or each other.) Either way, the attack must be in a dense urban area to maximize casualties. But few people are actually outside at any given time. Most are either in a building, in a car or public transport, even during rush hour or lunch break. And for every person who gets killed by these drones, there will be many other people watching safely through car or building windows who can see what is going on and alert other people. So people’s vulnerability will be pretty limited. If the adversary decides to bring large drones to demolish barriers then it will be a much more expensive and complex operation. Plus, people only have to wait a little while until the drones run out of energy. The event will be over in minutes, probably.
If we imagine that drone swarms are a sufficiently large threat that people prepare ahead of time, then it gets still harder to inflict casualties. Sidewalks could have light coverings (also good for shade and insulation), people could carry helmets, umbrellas, or cricket bats, but most of all people would just spend more time indoors. It’s not realistic to expect this in an ordinary peacetime scenario but people will be quite adept at doing this during military bombardment.
Also, there are options for hard countermeasures which don’t use technology that is more complicated than that which is entailed by these slaughterbots. Fixtures in crowded areas could shoot anti-drone munitions (which could be less lethal against humans) or launch defensive drones to disable the attackers.
Now, obviously this could all change as drones get better. But defensive measures including defensive drones could improve at the same time.
I should also note that the idea of delivering a cheap deadly payload like toxins or a dirty bomb via shipping container has been around for a while yet no one has carried out.
Finally, an order of hundreds of thousands of drones, designed as fully autonomous killing machines, is quite industrially significant. It’s just not something that a nonstate actor can pull off. And the idea that the military would directly construct mass murder drones and then lose them to terrorists is not realistic.
I don’t think the history of armed conflict supports the view that people become much more willing to go to war when their weapons become more precise. After all the primary considerations in going to war are matters of national interest, not morality. If there is such a moral hazard effect then it is small and outweighed by the first-order reduction in harm.
Just because drones can deploy WMDs doesn’t mean they are anything special—you could can also combine chem/bio/nuke weapons with tactical ballistic missiles, with hypersonics, with torpedoes, with bombers, etc.
I stand by the point in my previous post that it is a mistake to conflate a lower threshold for conflict with a higher (severity-weighted) expectation of conflict, and military incidents will be less likely to escalate (ceteris paribus) if fewer humans are in the initial losses.
Someone (maybe me) should take a hard look at these recent arguments you cite claiming increases in escalation risk. The track record for speculation on the impacts of new military tech is not good so it needs careful vetting.
The mean expectations are closer to the lower ends of these ranges.
Currently, 87,000 people die in state-based conflicts per year. If automation cuts this by 25% then in three decades it will add up to 650k lives saved. That’s still outweighed if the change in probability is 10%, but for reasons described previously I think 10% is too pessimistic.
As the absolute minimum to address #3, I think advocacy on AWSs should be compared to advocacy on other new military tech like hypersonics and AI-enabled cyber weapons which come with their own fair share of similar worries.
If you stigmatize them in the Anglosphere popular imagination as a precursor to a multilateral agreement, then that’s basically what you’re doing.
I would like to again mention the Ottawa Treaty, I don’t know much about it, but it seems like a rich subject to explore for lessons that can be applied to AWS regulation.
Thanks for your replies here, and for your earlier longer posts that were helpful in understanding the skeptical side of the argument, even if I only saw them after writing my piece. As replies to some of your points above:
It is unclear to me what you suggest we would be “sacrificing” if militaries did not have the legal opportunity to use lethal AWSs. The opportunity I see is to make decisions, in a globally coordinated way and amongst potentially adversarial powers, about acceptable and unacceptable delegations of human decisions to machines, and to enforce those decisions. I can’t see how success in doing so would sacrifice the opportunity. Moreover, a ban on all autonomous weapons (including purely defensive nonlethal ones) is very unlikely and not really what anyone is calling for, so there will be plenty of opportunity to “practice” on non-lethal AWs, defenses against AWs, etc., on the technical front; there will also be other opportunities to “practice” on what life-and-death decisions should and should not be delegated, for example in judicial review.
Though I understand why you have drawn a connection to the Ottawa Treaty because of its treatment of landmines, I believe it is the wrong analogy for AWSs. The Biological Weapons Convention is more apt, and I think the answer would be “yes”: we have learned something about international governance and coordination of dangerous technology from the BWC. I also believe that the agreement not to use landmines is a global good.
I am not sure why you are confident it would be easier to reach binding agreements on these suggested matters. To the extent that it is possible, that may suggest there is little value to be gained. What these suggestions generally miss is that there is little popular or political will to create an international agreement on, e.g., internet connectivity. It’s not as high-stakes or consequential as lethal AWSs, and to a first approximation, nobody cares. The point is to show that agreement can be reached in an arena that is consequential for militaries, and this is our best opportunity to do so.
There are a lot of important and difficult moral questions worth a long discussion, as well as more practical questions of whether systems and chains-of-command are in fact created in a way that responsibility rests somewhere rather than nowhere. I’ve got my own beliefs on those, which may or may not be shared, but I actually don’t think we need to address them to judge the importance of limitations on autonomous weapons. I don’t necessarily agree that the burden is on me, though: if you are creating a new system for killing people, it is certainly both legally (and I believe ethically) “your” responsibility to show that it is consistent with international law, for example.
At the time of release, Slaughterbots was meant to be speculative and to raise awareness of the prospect of risk. AGI and full-scale nuclear war haven’t happened either—that doesn’t make the risks unreal. Would you lodge the same complaint against “The Day After”? Regardless, as to whether people are developing such drones, I suggest you review the report “Slippery Slope” by PAX on such systems, especially its material on the Kargu drones from Turkey. I think you will conclude that it is relatively “easy” and “useful” to develop lethal AWSs.
Responding to paragraphs starting with “A mass drone swarm terror attack…” through the paragraph starting with “Now, obviously this could…”: Your analysis here is highly speculative and presupposes a particular pattern in the development of offensive and defensive capabilities of lethal AWSs. I welcome any evidence you have on these points, but your scenario seems to a) assume limited offensive capability development, b) assume the willingness and ability to implement layers of defensive measures at all “soft” targets, c) focus only on drones rather than the many other possible lethal AWSs, and d) still entail considerable costs—both in countermeasures and in psychological harm—which suggests a steep price to be paid for lethal AWSs even in a rosy scenario.
I believe we agree that in terms of serious (like 1000+ casualties) WMDs, the far greater risk is smaller state actors producing or buying them, not a rogue terror organization. As a reminder, it won’t (just) be the military making these weapons, but weapons makers who can then sell them (e.g., look at the export of drones by China and Turkey throughout many high-conflict regions). Further, once produced or sold to a state actor, weapons can and do then come into the possession of rogue actors, including WMDs. Look no further than the history of the Nunn-Lugar Cooperative Threat Reduction program for real incidents and close calls, the transfer of weapons from Syria to Hezbollah, etc.
It may or may not be the case; as you indicate, it’s mixed in with a lot of factors. But precision (and the lack of infrastructure destruction) is actually not the only, or even the primary, reason I expect AWs will lead to wider conflict, depending on the context. In addition to potentially being more precise, lethal AWSs will be less attributable to their source and will pose less risk to use (in both physical and financial costs). At least in terms of violence (if not, to date, war), the latter seems to make a large difference, as exhibited by the US (manned) drone program, for example.
I’m not sure how to interpret this. The lower ends of the ranges are the lower ends given by various estimators. The mean of this range is somewhere in the middle, depending on how you weight them.
The question of whether small-scale conflicts will increase enough to counterbalance the life-saving of substituting AWs for soldiers is, I agree, hard to predict. But unless you take the optimistic end of the spectrum (as I guess you have), I don’t see how the numbers can balance at all once large-scale wars are included.
I welcome your investigation. I agree that speculation on the impacts of new military tech has not fared well (in all directions), which is why precaution is a wise course of action.
I agree that other emerging technologies (including some you don’t mention, like synthetic bioweapons), deserve greater attention. But that doesn’t mean lethal AWSs should be ignored.
This is a very strange argument to me. Saying something is problematic, and being willing in principle not to do it, seems like a pretty necessary precursor to making an agreement with others not to do it. Moreover, if something is ethically wrong, we should be willing to not do it even if others do it — but far, far better to enter into an agreement so that they don’t.
I’m saying there are substantial constraints on using cheap drones to attack civilians en masse, some of them are more-or-less-costly preparation measures and some of them are not. Even without defensive preparation, I just don’t see these things as being so destructive.
If we imagine offensive capability development then we should also imagine defensive capability development.
What other AWSs are we talking about if not drones?
Hmm. Have there been any unclaimed drone attacks so far, and would that change with autonomy? Moreover, if such ambiguity does arise, would it not also mitigate the risk of immediate retaliation and escalation? My sense is that there are conflicting lines of reasoning here. How can AWSs increase the risks of dangerous escalation, but also be perceived as safe and risk-free by users?
I mean, we’re uncertain about the 1-7Bn figure and uncertain about the 0.05-2% figure. When you multiply them together, the low x low is implausibly low and the high x high is implausibly high, but the mean x mean would be closer to the lower end. So if the means are 4Bn and 1%, then the product is 40M, which is closer to the lower end of your 0.5-150M range. Yes, I realize this makes little difference (assuming your 1-7Bn and 0.05-2% estimates are normal distributions). It does seem apparent to me now that the escalation-to-nuclear-warfare risk is much more important than some of these direct impacts.
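To make the multiplication concrete, here is a sketch of the endpoint-by-endpoint products. The inputs are my reading of the estimates under debate (deaths of 1-7Bn if the event occurs, a probability range whose low/high/mean reproduce the 0.5M-150M product range being discussed); none of these are settled figures.

```python
# Sketch of the expected-value multiplication discussed above.
# Assumed inputs (the reading that reproduces the 0.5M-150M product range):
#   deaths if the event occurs: 1-7 billion, mean ~4 billion
#   probability of the event:   0.05%-2%,    mean ~1%
low = 1e9 * 0.0005   # low x low:   0.5 million
high = 7e9 * 0.02    # high x high: 140 million (~150M)
mean = 4e9 * 0.01    # mean x mean: 40 million

print(f"low={low:,.0f}, high={high:,.0f}, mean={mean:,.0f}")
# The mean product sits much nearer the low end of the interval:
# multiplying two ranges endpoint-by-endpoint overstates the spread
# of the product, so the extremes are implausible while the mean is not.
```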
I think they’d probably save lives in a large-scale war for the same reasons. You say that they wouldn’t save lives in a total nuclear war; that makes sense if civilians are attacked just as severely as soldiers, but large-scale wars may not be like that. Even nuclear wars may not involve major attacks on cities (though yes, I realize the EV is greater for those that do).
I suppose that’s fine, I was thinking more about concretely telling people not to do it, before any such agreement.
You also have to be in principle willing to do something if you want to credibly threaten the other party and convince them not to do it.
Well there are some cases where a problematic weapon is so problematic that we should unilaterally forsake it even if we can’t get an agreement. But there are also some cases where it’s just problematic enough that a treaty would be a good thing, but unilaterally forsaking it would do net harm by degrading our relative military position. (Of course this depends on who the audience is, but this discourse over AWSs seems to primarily take place in the US and some other liberal democracies.)