Professor of Physics at UCSC, and co-founder of the Future of Life Institute, Metaculus, and the Foundational Questions Institute
aaguirre
Thanks for pointing these out. Very frustratingly, I just wrote out a lengthy response (to the first of the linked posts) that this platform lost when I tried to post it. I won’t try to reconstruct that, but will just note for now that the conclusions and emphases are quite different, probably most of all in terms of:
- Our greater emphasis on the WMD angle and the qualitatively different dynamics of future autonomous weapons (AWs)
- Our greater emphasis on potential escalation into great-power wars
- Our view that, while international agreement (rather than unilateral eschewal) is the goal, stigmatization is a necessary precursor to such an agreement.
Indeed, the survey by CSET linked above is somewhat frustrating in that it does not directly address autonomous weapons at all. The closest it comes is talking about the “US battlefield” and “global battlefield”, but the specific example applications surveyed are:
So there was a missed opportunity to better disambiguate things that many AI researchers are very concerned about (including lethal autonomous weapons) from those that very few are (e.g. taking money from the DoD to work on research with humanitarian goals). The survey captures some of this diversity but, by avoiding the issues that many find most problematic, only tells part of the story.
It’s also worth noting that the response rate to the survey was extremely low, so there is a danger of serious systematic response bias.
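To make that worry concrete, here is a minimal sketch with entirely made-up numbers (none of them from the CSET survey): if the researchers most concerned about autonomous weapons are also less willing to answer, a low response rate can badly skew the headline estimate.

```python
# Hypothetical illustration of non-response bias; every number below is
# invented for the example, not taken from the actual survey.

population = 100_000           # assumed survey frame
true_concerned_frac = 0.60     # assumed true fraction who are very concerned

# Assumed response propensities: concerned researchers respond less often,
# which is exactly what drives the bias.
p_respond_concerned = 0.02
p_respond_unconcerned = 0.08

concerned = population * true_concerned_frac
unconcerned = population - concerned

resp_concerned = concerned * p_respond_concerned
resp_unconcerned = unconcerned * p_respond_unconcerned
respondents = resp_concerned + resp_unconcerned

print(f"response rate:              {respondents / population:.1%}")        # 4.4%
print(f"true concerned fraction:    {true_concerned_frac:.0%}")             # 60%
print(f"observed among respondents: {resp_concerned / respondents:.1%}")    # 27.3%
```

With a healthy response rate the two propensities are forced close together (both must be near 1), so the distortion is bounded; at a response rate of a few percent, they can easily differ severalfold, and the respondent pool stops resembling the population.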