Executive summary: The author argues in favor of an international moratorium on developing artificially intelligent systems until they can be proven safe, responding to common objections.
Key points:
- A moratorium would require AI systems to undergo safety reviews before release, not ban AI entirely. It could fail in various ways but would likely still slow the proliferation of dangerous AI.
- Failure may not make things much worse: existing initiatives could continue, and treaties can be amended. Doing nothing risks an AI arms race.
- Success would not necessarily lead to dictatorship or permanently halt progress. Systems shown to be safe would still be allowed, and treaties can evolve if they become irrelevant.
- The benefits of AI do not justify rushing development without appropriate safeguards against existential risks.
- The evidence for AI risk is not yet definitive, but negotiating safety mechanisms takes time, so discussions should begin before it is too late.
- The disagreement is largely predictive rather than values-based: optimism versus pessimism about how easy alignment will be. With open-mindedness, accumulating evidence may lead to agreement over time.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.