Professor of Physics at UCSC, and co-founder of the Future of Life Institute, Metaculus, and the Foundational Questions Institute
aaguirre
No, that was just a super rough estimate: world GDP of ~$100 Tn/year, so one decade's worth is ~$1 Qd, and I'm guessing a global nuclear war would wipe out a significant fraction of that.
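To spell out the rough arithmetic behind that figure (using the ~$100 Tn/year world GDP round number above):

$$10~\text{years} \times \$100~\text{Tn/year} = \$1{,}000~\text{Tn} = \$1~\text{Qd},$$

so wiping out a significant fraction of a decade's output would put losses on the order of hundreds of trillions of dollars.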
My intuition has been that, at least in the medium term, unless AWs are self-replicating they'd pose GCR risk primarily through escalation to nuclear war; but if there are other scenarios, it would be interesting to know about them (by PM if you're worried about info hazards).
The problem is that I was not logged in on that browser. It asked me to log in to post the comment, and after I did so the comment was gone.
Indeed, the CSET survey linked above is somewhat frustrating in that it does not directly address autonomous weapons at all. The closest it comes is talking about a "U.S. battlefield" and a "global battlefield," but the specific example applications surveyed are:
U.S. Battlefield—As part of a larger initiative to assist U.S. combat efforts, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.
Global Battlefield—As part of a larger initiative with U.S. allies to enhance global security, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.
So there was a missed opportunity to better disambiguate things that many AI researchers are very concerned about (including lethal autonomous weapons) from those that very few are (e.g. taking money from the DoD to work on research with humanitarian goals). The survey captures some of this diversity but, by avoiding the issues that many find most problematic, only tells part of the story.
It’s also worth noting that the response rate to the survey was extremely low, so there is a danger of some serious response bias systematics.
Thanks for pointing these out. Very frustratingly, I just wrote out a lengthy response (to the first of the linked posts) that this platform lost when I tried to post it. I won't try to reconstruct that, but will just note for now that the conclusions and emphases are quite different, probably most of all in terms of:
- Our greater emphasis on the WMD angle and the qualitatively different dynamics of future AWs
- Our greater emphasis on potential escalation into great-power wars
- While agreeing that international agreement (rather than unilateral eschewal) is the goal, we believe that stigmatization is a necessary precursor to such an agreement.
I think I was using the Brave browser, which may store less locally, so it's possible that was a contributing factor.