I believe by your definition, lethal autonomous weapon systems already exist and are widely in use by the US military. For example, the CIWS system will fire on targets like rapidly moving nearby ships without any human intervention.
https://en.wikipedia.org/wiki/Phalanx_CIWS
It’s tricky because there is no clear line between “autonomous” and “not autonomous”. Is a land mine autonomous because it decides to explode without human intervention? Well, land mines could have more and more advanced heuristics built into them over time. At what point would such a mine become autonomous?
I’m curious what ethical norms you think should apply to a system like the CIWS, designed to autonomously engage, but within a relatively restricted area, i.e. “there’s something coming fast toward our battleship, let’s shoot it out of the air even though the algorithm doesn’t know exactly what it is and we don’t have time to get a human into the loop”.
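To make that blur concrete, here’s a toy sketch in Python. Every field, threshold, and rule is invented for illustration; this is not how CIWS, a land mine, or any real fire-control system actually works.

```python
from dataclasses import dataclass

@dataclass
class Track:
    speed_mps: float         # closing speed of the detected object (made-up value)
    range_m: float           # distance from the platform
    iff_response: bool       # did it answer an identification-friend-or-foe query?
    matches_threat_db: bool  # does it resemble a stored threat signature?

def engage_v1(track: Track) -> bool:
    """Point-defense-style rule: fire on anything fast and close."""
    return track.speed_mps > 300 and track.range_m < 2000

def engage_v2(track: Track) -> bool:
    """The same rule with a couple of 'smarter' heuristics layered on top."""
    if track.iff_response:          # never engage something identifying as friendly
        return False
    if track.matches_threat_db:     # engage recognized threat profiles at longer range
        return track.range_m < 4000
    return engage_v1(track)         # otherwise fall back to the simple rule

incoming = Track(speed_mps=450, range_m=1500, iff_response=False, matches_threat_db=True)
print(engage_v1(incoming), engage_v2(incoming))  # True True
```

The second version is a bit more “discriminating” than the first, and you could keep layering heuristics like this indefinitely; at no single step does the system obviously cross from “automated” to “autonomous”.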
Hi Kevin,
Thank you for your comment and thanks for reading :)
The key question for us is not “what is autonomy?” (a question that has bogged down the UN debates for years) but rather “what are the systemic risks of certain military AI applications, including a spectrum of autonomous capabilities?” I think many of today’s systems are better thought of as closer to “automated” than truly “autonomous,” as I mention in the report, but again, binary distinctions like that are less salient than many people assume. What we care about is the multi-dimensional problem of more and more autonomy in more and more systems, and how that can destabilize the international system.
I agree with your point that it’s a tricky definitional problem. Point 3 under the “Killer Robot Ban” section of the report identifies exactly this as one of the key issues: “The line between autonomous and automated systems is blurry.” I think you’re pointing to a key problem with how people often think about this issue.
I’m sorry I won’t be able to give a satisfying answer about “ethical norms,” as that’s a bit outside the purview of the report, which focuses more on strategic stability and global catastrophic risks (GCRs). (I will say that I think “human in the loop” is not the solution it’s often made out to be, given some of the issues with speed and cognitive biases discussed in the report.) There are people doing good work on related questions in international humanitarian law, though, who will give a much more interesting answer.
Thanks again!