Executive summary: The MIRI Technical Governance Team outlines four strategic scenarios for navigating the risks of advanced AI and argues that building global Off Switch infrastructure, enabling a coordinated Halt in frontier AI development, is the most credible path to avoiding extinction. The post also presents a broad research agenda in support of this goal.
Key points:
Four strategic scenarios—Light-Touch, US National Project, Threat of Sabotage, and Off Switch/Halt—map potential geopolitical trajectories in response to the emergence of artificial superintelligence (ASI), with varying risks of misuse, misalignment, war, and authoritarian lock-in.
The Off Switch and Halt scenario is preferred because it allows for coordinated global oversight and pausing of dangerous AI development, minimizing Loss of Control risk and enabling cautious, safer progress.
The default Light-Touch path is seen as highly unsafe, marked by inadequate regulation, rapid proliferation, and a high risk of catastrophic misuse, making it an untenable long-term strategy despite being easy to implement.
The US National Project could reduce some risks but introduces others, including global instability, authoritarian drift, and alignment failures, especially under arms race conditions.
Threat of Sabotage offers a fragile and ambiguous form of stability, relying on mutual interference to slow AI progress, but raises concerns about escalation and is seen as less viable than coordinated cooperation.
The research agenda targets scenario-specific and cross-cutting questions, such as how to monitor compute, enforce a halt, structure international agreements, and assess strategic viability—encouraging broad participation from the AI governance ecosystem.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.