Thoughts on yesterday’s UN Security Council meeting on AI

Firstly, it’s encouraging that AI is being discussed as a threat at the highest global body dedicated to ensuring peace and security. This seemed like a remote possibility just 4 months ago.

However, throughout the meeting, (possibly near-term) extinction risk from uncontrollable superintelligent AI was the elephant in the room. It got ~1% of the airtime, when it needs ~99%, given the venue and its power to stop the threat. Let’s hope future meetings improve on this. Ultimately we need the UNSC to put together a global non-proliferation treaty on AGI if we are to stand a reasonable chance of making it out of this decade alive.

There was plenty of mention of using AI for peacekeeping. However, this seems naive in light of the offence-defence asymmetry facilitated by generative AI (especially when it comes to threats like bio-terror/engineered pandemics and cybercrime/warfare). And in the limit of outsourcing intelligence gathering and strategy recommendations to AI (whilst still keeping a human in the loop), you get scenarios like this.

Highlights:

China mentioned Pause: “The international community needs to… ensure that risks beyond human control don’t occur… We need to strengthen the detection and evaluation of the entire lifecycle of AI, ensuring that mankind has the ability to press the pause button at critical moments”. (Zhang Jun, representing China at the UN Security Council meeting on AI)

Mozambique mentioned the Sorcerer’s Apprentice, human loss of control, recursive self-improvement, accidents, catastrophic and existential risk: “In the event that credible evidence emerges indicating that AI poses an existential risk, it’s crucial to negotiate an intergovernmental treaty to govern and monitor its use.” (Manuel Gonçalves, Deputy Minister for Foreign Affairs of Mozambique, at the UN Security Council meeting on AI)

(A bunch of us protesting about this outside the UK Foreign Office last week.)

(PauseAI’s comments on the meeting on Twitter.)

(Discussion with Jack Clark on Twitter re his lack of mention of x-risk. Note that the post-war atomic settlement, the Baruch Plan, would probably have been quite different if the first nuclear detonation had been assessed to have a significant chance of igniting the entire atmosphere!)

(My Tweet version of this post. I’m tweeting more, as I think it’s time for mass public engagement on AGI x-risk.)