AUKUS Military AI Trial

Introduction

This summary post brings together some recent developments regarding AUKUS in the AI sphere. They are reported in a fragmented manner across the government websites of the three countries, so I thought I would collect them here. Please note that all of this information is available online via Google; I don’t have access to anything beyond that, so I can’t really answer questions that you couldn’t answer yourselves.

AUKUS

AUKUS is an acronym for Australia (A), the United Kingdom (UK), and the United States (US): the three powers collaborating on a number of major military projects intended to promote the stability of the Indo-Pacific and the wider world. The trilateral security partnership covers nuclear submarines, cyber capabilities, artificial intelligence, quantum technologies, electronic warfare, and other areas. The UK Government describes the nuclear submarines as ‘Pillar 1’ and the AI, cyber, and other advanced-capability elements as ‘Pillar 2’ of the agreement.

AI is set to be a significant part of Pillar 2, with all three powers investing heavily in military AI. The public disclosures so far show a focus on accelerating the adoption of AI within military capability and on improving the resilience of autonomous and AI-enabled systems in contested environments, and they also mention defence against AI-enhanced threats. This is not surprising given how rapidly AI is being adopted into military technology (according to the available research). RAND recently released an in-depth report on AI and AUKUS, examining how each of the three states, and third parties, are attempting to develop military AI responsibly.

AI Trials

Some AUKUS AI trials have taken place in the UK, though the most recent trial, involving the Defence Science and Technology Laboratory (DSTL) and the UK’s armed forces, took place in Australia. The intention of the trial was to identify and resolve vulnerabilities in autonomous systems in the field.

Though there is little information on what was tested and what the results were (for obvious reasons), it does point to much deeper co-operation between governments on military AI than previously expected. There is no real equivalent of this anywhere else in the world, yet. It is also notable how much of a leading role the UK is taking, despite the comparative size of the three countries, though the external view may not reflect the internal reality.

I wonder whether there were any autonomous-versus-autonomous simulations or trials, especially considering that we are once again entering an age of major nations at war. We have seen some robot-on-robot fighting in the Ukraine War, but it has mostly been limited to drone-on-drone ramming attacks and, in some instances, naval engagements; in all of those cases the robotics were (as far as I’m aware) human-controlled.

Impacts

The trends here display a strengthening collaboration between major military powers on military AI, at a scale not anticipated by the contemporary literature. It is also unusually public, most likely due to the sheer amount of public spending on the project and the criticism the project has received for its cost. It is somewhat refreshing to see AI Safety mentioned, even if only loosely alluded to in terms of governance mechanisms; those mechanisms will, however, be largely secret from the public, which makes them more difficult to analyse.

More specialist research is needed in this area to explore how this partnership will affect the wider AI Governance landscape.

Further Research

Highly impactful future AI Governance research could look at:

1. How exceptions and derogations in current AI Governance regimes could influence military AI systems

2. Compliance and governance mechanisms for military-specific AI systems

3. How AI Governance may impact Export Control regimes