For nuclear risk reduction, I believe AI/ML and the growing availability of big data are potentially game changing. In a world where the vast majority of collective human actions and behaviors leave a digital footprint, we now have a new set of tools for detecting illicit nuclear behavior, such as illicit trade in nuclear dual-use components or the production of weapons-grade uranium or plutonium. As with any new technology, we have to work to capture the benefits while minimizing the risks, so we at NTI are looking at both sides.
Last year, NTI published a report on a pilot project we undertook with a partner, C4ADS, that demonstrated how large trade datasets and machine learning can be used to detect illicit nuclear trade. After we published our work, a number of the entities we identified were added to government sanctions lists. We believe this work can be significantly expanded to build and demonstrate an entirely new approach to verification, one that enables a safer nuclear strategy for the future.
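The published NTI/C4ADS work used large proprietary trade datasets and machine-learning methods whose details are not reproduced here; as a hedged illustration only, the toy sketch below shows one simple signal such an approach can exploit: flagging shipments whose declared unit value is a robust statistical outlier for their commodity code, which can surface mislabeled or suspiciously priced dual-use goods. All records, commodity codes, and thresholds are hypothetical.

```python
# Illustrative sketch only -- not NTI's actual methodology.
# Flags shipments whose declared value per kilogram deviates strongly
# from the median for the same commodity code, using a median-absolute-
# deviation (MAD) score, which resists masking by the outlier itself.
from statistics import median

# Hypothetical shipment records: (commodity_code, declared_value_usd, weight_kg)
shipments = [
    ("8401.20", 12_000, 100), ("8401.20", 11_500, 95),
    ("8401.20", 12_400, 102), ("8401.20", 95_000, 98),  # suspicious outlier
    ("8401.20", 11_900, 101),
]

def flag_outliers(records, threshold=3.5):
    """Return indices of records whose unit value is a robust outlier."""
    unit_values = [value / weight for _, value, weight in records]
    med = median(unit_values)
    mad = median(abs(uv - med) for uv in unit_values)
    if mad == 0:
        return []
    # 0.6745 scales the MAD so the score is comparable to a z-score
    return [i for i, uv in enumerate(unit_values)
            if 0.6745 * abs(uv - med) / mad > threshold]

print(flag_outliers(shipments))  # -> [3]: the $95,000 shipment stands out
```

In practice a system like this would combine many such signals (shipping routes, corporate networks, end-user histories) before any entity is flagged for human review.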
In April, the National Academies published an Interim Report (led by Jill Hruby) on their ongoing exploration of Nuclear Proliferation and Arms Control Monitoring, Detection, and Verification. The Interim Report finds that technological advances provide unprecedented opportunities for staying ahead of complex and expanding monitoring, detection, and verification challenges, and that monitoring, detection, and verification technology should be a higher national security priority, with a long-term vision and regular evaluation of progress. Technological change is creating transformational possibilities for nuclear threat reduction, and we should work diligently to develop national and global innovation ecosystems that can deliver on this promise.
On the flip side, AI/ML and automated decision-making also pose new challenges and risks for the nuclear system. In August, NTI published a report titled Assessing and Managing the Benefits and Risks of Artificial Intelligence in Nuclear-Weapon Systems. The report, by Jill Hruby, then Sam Nunn Distinguished Fellow (and now Administrator of the National Nuclear Security Administration), and former NTI intern and MIT doctoral student M. Nina Miller, explores the possible applications of AI to nuclear-weapon systems and assesses the benefits, risks, and strategic stability implications. The report recommends: 1) research on low technical-risk approaches and fail-safe protocols for AI use in high-consequence applications; 2) that states with nuclear weapons make declaratory statements about the role of human operators in nuclear-weapon systems and/or the prohibition or limits of AI use in those systems; and 3) increased international dialogue on the implications of AI use in nuclear-weapon systems, including how AI could affect strategic and crisis stability, and exploration of areas where international cooperation or the development of international norms, standards, limitations, or bans could be beneficial.
On the biosecurity side, we’re excited about our work on a prototype to expand DNA synthesis screening practices globally and to prevent exploitation and misuse by malicious actors.
NTI | bio, with the support of talented technical consultants and in partnership with the World Economic Forum, has developed several critical elements of the Common Mechanism prototype, which improves on current industry best practices. The prototype includes: (1) novel databases of ‘benign’ and ‘biorisk’ sequences, informed by publicly available literature and by industry expertise and input; and (2) a screening algorithm that compares incoming orders with the contents of those databases. A key feature of the algorithm is its use of statistical models known as ‘profile hidden Markov models,’ which early tests have shown to be resilient against subversion attempts.
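As a greatly simplified sketch of the idea behind profile-based screening: real tools build full profile hidden Markov models (for example with HMMER) that also model insertions and deletions, but the core intuition can be shown with position-specific emission probabilities alone, scoring a query against a whole family of known sequences rather than against any single one. Everything below is hypothetical toy data, not real biorisk sequences, and this is not the Common Mechanism's actual algorithm.

```python
# Simplified, match-states-only profile scoring (a log-odds profile).
# Real profile HMMs add insert/delete states and transition probabilities.
import math
from collections import Counter

BACKGROUND = 0.25  # uniform background frequency for A, C, G, T

def build_profile(alignment, pseudocount=1.0):
    """Per-column emission probabilities from an aligned sequence family."""
    length = len(alignment[0])
    profile = []
    for col in range(length):
        counts = Counter(seq[col] for seq in alignment)
        total = len(alignment) + 4 * pseudocount
        profile.append({b: (counts[b] + pseudocount) / total for b in "ACGT"})
    return profile

def log_odds_score(profile, query):
    """Sum of per-position log-odds of the query vs. the background model."""
    return sum(math.log(col[base] / BACKGROUND)
               for col, base in zip(profile, query))

# Toy 'family of concern' alignment and two hypothetical incoming orders
family = ["ACGTAC", "ACGTTC", "ACGAAC"]
profile = build_profile(family)
print(log_odds_score(profile, "ACGTAC"))  # high: resembles the family
print(log_odds_score(profile, "TTTTGG"))  # low: unrelated sequence
```

Because the profile captures which positions vary across the family and which are conserved, small deliberate edits to an ordered sequence shift the score far less than they would shift an exact-match or single-reference comparison, which is one intuition for the resilience to subversion noted above.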
As a next step, the project team will construct a “decision support system” that presents the results of the screening algorithm while providing important context: the kind of organism the sequence comes from, the function it might have, and a recommendation on whether to proceed with the order. These tools will allow DNA synthesis providers to assess effectively and efficiently whether a given DNA synthesis order is benign or constitutes a risk.
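The decision-support layer described above can be thought of as a thin function over the screener's output. The sketch below is purely hypothetical: the field names, thresholds, and recommendation wording are invented for illustration and may differ from the actual Common Mechanism design.

```python
# Hypothetical decision-support sketch: turn a screening score plus
# contextual annotations into a recommendation a human can act on.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    best_hit_score: float      # e.g., log-odds score from the profile search
    source_organism: str       # organism the best-matching sequence comes from
    predicted_function: str    # annotated function of the matched region

def recommend(result: ScreeningResult, risk_threshold: float = 20.0) -> str:
    """Return a recommendation along with the context a reviewer needs."""
    if result.best_hit_score < risk_threshold:
        return "PROCEED: no significant match to sequences of concern."
    return (f"REVIEW: strong match (score {result.best_hit_score:.1f}) to a "
            f"{result.source_organism} sequence annotated as "
            f"'{result.predicted_function}'. Escalate to a human reviewer.")

print(recommend(ScreeningResult(4.2, "Organism A", "housekeeping gene")))
print(recommend(ScreeningResult(35.0, "Organism B", "sequence of concern")))
```

Keeping the final recommendation tied to human-readable context, rather than emitting a bare accept/reject flag, is what lets providers make an informed judgment on borderline orders.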
Future steps include acquiring additional datasets to test the Common Mechanism prototype, improving the quality of the reference databases, and rigorously testing the Common Mechanism with close partners and a wider circle of peer reviewers.