Executive summary: A model shows that as the number of agents with access to potentially catastrophic technologies grows exponentially, a global catastrophe quickly becomes almost inevitable, possibly within a decade.
Key points:
The model assumes an exponentially growing number of agents with access to a potentially dangerous technology, and an exponentially increasing probability of the technology causing a catastrophe.
Plugging in plausible numbers, the model predicts that the probability of a global catastrophe grows from near zero to near certainty within roughly 10 years (see the sketch after this list).
The model could apply to risks from synthetic biology and artificial intelligence, which are progressing very rapidly.
Smaller catastrophes that halt technological progress could potentially prevent a larger, extinction-level catastrophe.
Preventing the catastrophic outcome may require a powerful global control system to limit agents’ access to dangerous technologies.
Potential ways such a control system could emerge include a superintelligent AI singleton, strengthened global governance, or a single country achieving technological supremacy.
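The summary doesn't reproduce the post's exact equations, so here is a minimal sketch of a model with the shape described above: N(t) agents, each with an independent per-year probability p(t) of triggering a catastrophe, with both quantities growing exponentially. All starting values and growth rates (N0, p0, g, h) are illustrative assumptions, not figures from the post.

```python
# Minimal sketch (assumed form, not the post's exact model): exponentially
# growing agent count and per-agent catastrophe probability.
import math

N0, p0 = 100, 1e-6   # assumed starting agent count and per-agent annual risk
g, h = 1.0, 0.5      # assumed annual exponential growth rates

cum_survival = 1.0
for t in range(11):
    N = N0 * math.exp(g * t)             # number of capable agents in year t
    p = min(p0 * math.exp(h * t), 1.0)   # per-agent catastrophe probability
    survive_year = (1.0 - p) ** N        # no agent triggers a catastrophe this year
    cum_survival *= survive_year
    print(f"year {t:2d}: P(catastrophe by now) = {1 - cum_survival:.4f}")
```

With these illustrative numbers, the cumulative catastrophe probability stays near zero for the first few years, then climbs past 0.2 around year 5 and exceeds 0.99 by year 7 or 8, matching the "near zero to near certainty within around 10 years" dynamic the summary describes.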
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.