Thank you Stephen – really interesting to read. Keep up the good work.
Some quick thoughts.
1.
There was less discussion than I expected of mutual assured destruction type dynamics.
My uninformed intuition suggests that the largest source of risk of a genuinely existential catastrophe comes from scenarios where some actor has an incentive to design weapons that would be globally destructive, and also to persuade the other side that it would use those weapons if attacked, in order to have a mutually assured destruction deterrent. (The easiest way to persuade your enemy that you would destroy the world if attacked is to set up systems ensuring that you actually would destroy the world if attacked.)
I think in most scenarios actors' incentives are to avoid designing or using weapons that would destroy the world, but in this scenario their incentives push towards designing and using such weapons, which feels significant.
2.
Changing vulnerabilities. I also think it is possible that global vulnerability could go up or down significantly between now and when a conflict happens. Examples might be:
Other risks. Imagine climate change has become severe and the stopgap is ongoing geoengineering. A war would interrupt that, and any expected effects of the war on global weather systems (nuclear winter etc.) might have to account for a much more volatile climate.
New technology. Imagine a world where everyone's brain is linked directly to the web via brain–computer interfaces and is vulnerable to hacking, or where each superpower has a superintelligent AI, or where it is possible to genetically engineer supersoldiers, or where elites can control the ethical beliefs of their population with advanced biotech, etc. In all of these worlds, the chance of an existential catastrophe from a conflict seems higher.
More interconnectivity and reliance on technology. As the world has evolved, we have in many ways become more globally vulnerable to shocks, and we might expect this trend to continue.