FYI, if you dig into AI researchers' attitudes in surveys, they hate lethal autonomous weapons and really don't want to work on them. Will dig up reports, but for now check out: https://futureoflife.org/laws-pledge/
Indeed, the CSET survey linked above is somewhat frustrating in that it does not directly address autonomous weapons at all. The closest it comes is talking about a “US battlefield” and a “global battlefield”, but the specific applications it actually surveys are:
U.S. Battlefield—As part of a larger initiative to assist U.S. combat efforts, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.
Global Battlefield—As part of a larger initiative with U.S. allies to enhance global security, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.
So there was a missed opportunity to disambiguate the things many AI researchers are deeply concerned about (including lethal autonomous weapons) from those very few object to (e.g. taking DoD money for research with humanitarian goals). The survey captures some of this diversity, but by avoiding the issues many find most problematic it tells only part of the story.
It’s also worth noting that the response rate to the survey was extremely low, so there is a danger of serious systematic response bias: researchers with strong feelings about military work may have been much more (or less) likely to answer at all, which would skew every headline number.
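To make that worry concrete, here is a minimal sketch with entirely made-up numbers (nothing to do with CSET's actual data) showing how a single-digit response rate combined with differential willingness to respond can badly distort a measured attitude:

```python
import random

random.seed(0)

# Hypothetical illustration (not CSET's actual numbers): a population of
# 10,000 researchers, 30% of whom strongly oppose military AI work.
population = [True] * 3000 + [False] * 7000

# Suppose opponents are 3x as likely to answer the survey (15% vs 5%
# response rates), giving an overall response rate around 8%.
def responds(opposes):
    return random.random() < (0.15 if opposes else 0.05)

respondents = [opposes for opposes in population if responds(opposes)]

true_rate = sum(population) / len(population)
measured_rate = sum(respondents) / len(respondents)

print(f"overall response rate:    {len(respondents) / len(population):.0%}")
print(f"true opposition rate:     {true_rate:.0%}")      # ~30%
print(f"measured opposition rate: {measured_rate:.0%}")  # ~56%, inflated by selective response
```

Under these assumed numbers the survey nearly doubles the apparent opposition rate, and the distortion could just as easily run the other way; without knowing who didn't respond, a very low response rate leaves the results hard to interpret.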