Executive summary: AI governance needs explicit “theories of victory” that describe desired end states of existential security and strategies to achieve them, with three potential approaches being an AI moratorium, an “AI Leviathan”, or defensive acceleration.
Key points:
A theory of victory for AI governance should include an existentially secure endgame that preserves optionality, and a plausible strategy to achieve it.
An AI moratorium would prevent development beyond a threshold, but faces major coordination challenges.
An “AI Leviathan” would use the first transformative AI to enforce a monopoly, but risks lock-in of mistakes.
Defensive acceleration aims to outpace offensive AI capabilities with defensive ones, but requires careful technological development.
Nuclear weapons history offers relevant precedents, though AI differs in being dual-use and potentially self-improving.
AI governance actors should make their preferred theories of victory explicit to enable open discussion and examination.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.