There is no trade-off: social stabilization and international pacification are the main tools for reducing existential risk, which in my view mainly comes from nuclear war.
https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of
The counterargument is that nuclear war was perhaps never existential: even at Cold War peaks, there were not enough warheads and missiles, and they were not targeted at enough industrial centers, to drive humanity extinct. Entire continents and regions (Africa, South America) would have been uninvolved in the war and would have retained enough copies of human knowledge and tools to respond to the new reality. Even global-cooling scenarios ignore humans building makeshift greenhouses or other countermeasures.
Theoretically, a hostile AGI with unlimited access to self-replicating machinery, and freedom from interference by human militaries with comparable technology, could locate and kill every human on Earth.
A relevant difference is that nuclear bombs already exist, and AGIs do not…
Agree. Also, the very idea that a “hostile AGI” could exist assumes a bunch of things.
Notably:
(1) humans build large, powerful models with the cognitive capacity to even be hostile
(2) it is possible for a model to be optimized to actually run on stolen computers; this may be flatly impossible due to fundamental limits on computation
(3) once humans learn of the escaped hostile model, they are ineffective at countermeasures, or they actually choose to host the model and trade with it instead of licensing more limited, safer models
(4) the hostile model is many times more intelligent than safer, cognitively restricted models
(5) intelligence has meaningful benefits at very high levels and does not saturate, such that “5000 IQ” is meaningfully stronger in real-world conflicts than “500 IQ” even when the weaker model has a large resource advantage
Based on my current knowledge, I don’t believe any of these five things are true, and all five must be true or AGI doom is not possible.
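Since the argument treats doom as a conjunction of all five premises, its structure can be sketched as a Fermi-style product of per-premise probabilities. This is a minimal illustration, assuming independence between the premises; the probabilities below are hypothetical placeholders, not estimates from this thread.

```python
import math

# Hypothetical per-premise probabilities (placeholders only).
# Under an independence assumption, P(doom) is bounded above by
# the product of the probabilities of the five premises.
premises = {
    "humans build models capable of hostility": 0.5,
    "a model can run on stolen computers": 0.5,
    "countermeasures fail or humans host it": 0.5,
    "hostile model far smarter than safe models": 0.5,
    "intelligence keeps paying off at extremes": 0.5,
}

# Multiply the premise probabilities to get the conjunctive bound.
p_doom_upper_bound = math.prod(premises.values())
print(f"Upper bound on P(doom): {p_doom_upper_bound}")
```

Even with each premise at an even 0.5, the conjunction falls to 0.5^5 ≈ 0.03, which is the quantitative point behind “all five must be true.”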