New OGL and ITAR changes are shifting AI Governance and Policy below the surface: A simplified update
Note: All information in this post comes from government press releases, public document disclosures, or news reports. These may change, so read the up-to-date versions yourself. The legal changes discussed here are currently in draft form for expert input, and so are particularly likely to change. I do not have any more information than is released in these linked drafts.
What happened?
As detailed in this post, the United Kingdom, Australia, and the United States are collaborating on a series of major defence projects collectively known as AUKUS. This consists primarily of a new cutting-edge nuclear submarine fleet, but it also includes very significant investment in military AI across the three nations. There are lesser elements too, such as investment in collaborative quantum and cyber capabilities, but AI remains the key foundation.
What changes?
A major change this month has been the release of draft Open General Licences (OGLs) by all three countries, with the UK's arriving last but potentially being the most impactful given its central positioning.
This is because the Export Control Order 2008, itself a consolidation of orders made under the Export Control Act 2002, empowers the UK under Article 26 to authorise licences that permit cross-border trade in 'dual-use technology'.
For those without experience in the law of restricted AI systems, an essential primer: governments restrict the export of certain goods, technologies, and information that give a country an edge in the military or national security spheres. These controls also significantly affect how quickly defence systems can be developed. A 'dual-use' system is one that can serve either a military or a civilian purpose. For example, an artillery shell has no civilian purpose, so it is not dual-use; analytic software might, so it is.
Though each country has its own set of these laws, the US 'International Traffic in Arms Regulations' (ITAR) export controls are typically the ones used by US allies on joint projects, including the UK and Australia. These rules prevent such partners from passing US secret knowledge on to other third parties.
However, with these changes, the UK, Australia, and the US have made it much easier to share export-restricted defence AI systems between themselves and a select few parties. At present this means only the three AUKUS members, but there is significant discussion of Japan becoming involved to a greater or lesser extent, since much of AUKUS is focused on 'Indo-Pacific Security'.
The changes also allow appropriately cleared individuals of third-party citizenship to work on CONFIDENTIAL, OFFICIAL-SENSITIVE, or SECRET aspects of systems, depending on their overall clearance level. What is interesting is that these changes are AUKUS-specific but explicitly exclude submarines. It is therefore quite plain that this is an AI-centric change in all but name.
This is likely to result in much more attention being paid to AI and AI Governance by the defence sector than before, though the widespread use of AI in the Russo-Ukrainian War and the Israel-Hamas War was already having this effect to an extent.
So what?
Firstly, though much lip service is paid to 'increased AI Safety collaboration' between countries, measurably impactful changes rarely follow; many of these agreements are toothless. Export control regulations, by comparison, are the equivalent of a great white shark: your company doesn't get a fine. You, individually, go to prison. For a long time.
For better or for worse, these changes to ITAR and similar export controls are fairly major in both near-term and long-term impact. They will greatly accelerate the development of powerful military AI systems, particularly within the UK given the central nature of its contribution across both pillars. They are also, in part, an attempt by some nations to 'beat' others in AI advancement.
AI in the defence sector is already expanding at breakneck speed, but it is difficult to track how, or in what direction, given how opaque that industry is. AI Safety experts in this area are also extremely reluctant to share research and findings publicly. So those of us on the outside just have to watch for press releases and try to piece together what we think is happening.
What’s next?
Much of what happens next won't be visible to us on the outside, but I imagine we are going to see a very heavy emphasis on AI Governance within the defence sector now. In an ideal world this would produce academic-defence research partnerships; some of these already exist, but they are extremely rare and difficult to organise, mostly for legal reasons.
In reality, much of the public influence on these governance mechanisms is likely to come through the same policy and political routes currently used for wider AI Safety, i.e. political pressure on MPs (or their US equivalents).
What is particularly interesting are indications that the US (and perhaps the UK) may be heading in the other direction for wider, non-AUKUS AI models, introducing export controls on a wide range of dual-use or military-use AI systems. That is a topic for another post, depending on interest.