Thanks for making the case; I think this is written well and will make it easy for readers more sceptical than me to disagree concretely. I come away most convinced that this looks like a great opportunity to flesh out international cooperation infrastructure on AI. I expect rapid increases in AI capabilities in the next decades, capabilities that will go far beyond AWS and require a lot of good people having difficult conversations on the international stage.
One question I had when I read about “drawing a line”: I wonder if pushing for such a strong stance will make it harder to reach agreement, since I suppose there is currently a lot of investment going on. And even if countries sign the agreement, they may have little trust that other signatories will actually follow it, because it seems relatively easy to work on this secretly (compared to chemical and nuclear weapons).
Lastly, through Gwern’s Twitter I found a thread on a study which found that AI researchers are much more positive about working for the Department of Defense than one would expect from the public discussion around doing so.
FYI, if you dig into AI researchers’ attitudes in surveys, they hate lethal autonomous weapons and really don’t want to work on them. I will dig up reports, but for now check out: https://futureoflife.org/laws-pledge/
Indeed, the survey by CSET linked above is somewhat frustrating in that it does not directly address autonomous weapons at all. The closest it comes is in asking about a “U.S. battlefield” and a “global battlefield”, but the specific applications surveyed are:
U.S. Battlefield—As part of a larger initiative to assist U.S. combat efforts, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.
Global Battlefield—As part of a larger initiative with U.S. allies to enhance global security, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.
So there was a missed opportunity to better disambiguate the things many AI researchers are very concerned about (including lethal autonomous weapons) from those that very few are concerned about (e.g. taking money from the DoD for research with humanitarian goals). The survey captures some of this diversity, but by avoiding the issues that many find most problematic it only tells part of the story.
It’s also worth noting that the response rate to the survey was extremely low, so there is a danger of serious systematic response bias.