Modeling AI competition dynamics for better governance

TLDR:

The Survival and Flourishing Fund has committed a matching grant: donations are matched 1:1 up to $26,000. We need to raise at least $10,000 to unlock any of it, and the full match funds the team for seven months.

If you’ve been looking for a way to support quantitative AI governance work specifically, this is a good moment. Donate here, and feel free to reach out if you’re on the fence!

What we do

Modeling Cooperation builds quantitative research tools and software to help AI governance researchers and decision-makers understand the strategic dynamics of AI competition: specifically, the collective action problems that make cutting corners on safety individually rational even when it is collectively catastrophic.
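To make that collective action problem concrete, here is a minimal toy sketch in Python. The payoff numbers are invented purely for illustration, and this is not one of our actual models: two labs each decide whether to do thorough safety work or cut corners, and cutting corners is a dominant strategy even though both labs end up worse off than if neither had done it.

```python
# Toy illustration (made-up payoffs, not one of Modeling Cooperation's models):
# a two-lab safety race framed as a prisoner's-dilemma-style game.

ACTIONS = ["thorough", "cut_corners"]

# payoff[(my_action, rival_action)] = my payoff.
# Cutting corners gives a competitive edge whatever the rival does, but mutual
# corner-cutting raises accident risk enough to leave both labs worse off than
# mutual thoroughness.
payoff = {
    ("thorough",    "thorough"):    3,
    ("thorough",    "cut_corners"): 0,
    ("cut_corners", "thorough"):    4,
    ("cut_corners", "cut_corners"): 1,
}

def best_response(rival_action: str) -> str:
    """Return the action that maximises my payoff given the rival's action."""
    return max(ACTIONS, key=lambda a: payoff[(a, rival_action)])

# Cutting corners is each lab's best response to anything the rival does...
assert all(best_response(rival) == "cut_corners" for rival in ACTIONS)

# ...yet the resulting equilibrium is jointly worse than mutual thoroughness.
print("both cut corners:", payoff[("cut_corners", "cut_corners")], "each")
print("both thorough:   ", payoff[("thorough", "thorough")], "each")
```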

Our main project is the software platform behind the Intelligence Rising workshops, run by Dr. Shahar Avin (CSER, Cambridge & UK AISI) and his team at Technology Strategy Roleplay. These workshops use a structured simulation game to help senior figures in government, industry and academia develop an intuitive grasp of how AI competition unfolds. Our software makes those sessions easier to run and possible to scale.

We also collaborate with Professor Robert Trager (co-director of AIGI, Oxford) on tools including the Safety-Performance Tradeoff web app, which formally demonstrates that technical safety progress alone is not always sufficient to reduce risk in the absence of effective governance.
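To give a flavour of that kind of result, here is a hedged toy sketch. The functional forms below are assumptions chosen for illustration and this is not the model behind the app: a lab under competitive pressure deploys the most capable system whose estimated risk still falls within its risk tolerance, so better safety techniques increase the capability it deploys rather than lowering the risk it accepts, and only a lower tolerance, for example one imposed by governance, reduces risk.

```python
# Illustrative toy only (assumed functional forms; NOT the Safety-Performance
# Tradeoff app's model): a lab deploys the most capable system whose estimated
# risk stays within its risk tolerance.

def risk(capability: float, safety_tech: float) -> float:
    """Assumed risk model: risk rises with capability, falls with safety tech."""
    return capability / (1.0 + safety_tech)

def chosen_capability(tolerance: float, safety_tech: float, step: float = 0.01) -> float:
    """Highest capability (on a coarse grid) whose risk is within the tolerance."""
    c = 0.0
    while risk(c + step, safety_tech) <= tolerance:
        c += step
    return c

for safety_tech in (0.0, 1.0, 3.0):   # progressively better safety techniques
    for tolerance in (0.20, 0.05):    # competitive pressure vs. a governed cap
        c = chosen_capability(tolerance, safety_tech)
        print(f"safety_tech={safety_tech:.1f} tolerance={tolerance:.2f} "
              f"-> deploys capability {c:.2f} at risk {risk(c, safety_tech):.2f}")

# With these assumptions, better safety technology raises deployed capability
# while deployed risk stays pinned near the tolerance; only lowering the
# tolerance (e.g. via governance) brings risk down.
```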

Our research includes work on the Windfall Clause and a forthcoming analysis of how early warnings about AI risks can be made credible.

Why this work is different

Most AI governance work is qualitative. Most technical AI safety work focuses on alignment or interpretability. We’re doing something unique: building scalable software to help decision-makers experience the competition dynamics that compromise the safety of frontier AI systems.

We want to be honest about what we don’t know. While some research has shown that participants trained with simulated interaction were more accurate than game theorists in forecasting decisions in conflicts (see Green and Armstrong), it’s genuinely hard to measure whether simulation workshops durably change decision-making. What we can say is that we’re working on the underlying problem: decision-makers lack a felt sense of the competitive pressures that compromise safety.

Please do support us if you can. We are a small team with a distinctive contribution to the field and an outsized impact for our size. All of our models are open so that other governance researchers can build on them.

See our live campaign here.
