Agree. Also the very idea of a “hostile AGI” being able to exist assumes a bunch of things.
Notably:
(1) humans build large, powerful models with the cognitive capacity to even be hostile
(2) it is possible for a model to be optimized to actually run on stolen computers; this may be flat-out impossible due to fundamental limits on computation
(3) once humans learn of the escaped hostile model, they are ineffective at countermeasures, or they actually choose to host the model and trade with it instead of licensing more limited, safer models
(4) the hostile model is many times more intelligent than safer, cognitively restricted models
(5) intelligence has meaningful benefits at very high levels and doesn't saturate, so that "5000 IQ" is meaningfully stronger in real-world conflicts than "500 IQ" even when the weaker model has a large resource advantage
Based on my current knowledge, I don't believe any of these 5 things are true, and all 5 must be true or AGI doom is not possible.
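To make the "all 5 must be true" structure concrete, here is a minimal sketch of the conjunction math, assuming the premises are independent and using purely hypothetical probabilities (none of these numbers come from the argument above): even premises individually judged to be coin-flips or better multiply down to a small joint probability.

```python
# Illustrative only: hypothetical, made-up probabilities for each premise.
# The argument assigns no numbers; this just demonstrates the conjunction.
premises = {
    "(1) humans build models cognitively capable of hostility": 0.5,
    "(2) a model can run effectively on stolen compute": 0.3,
    "(3) countermeasures fail, or humans host and trade with it": 0.4,
    "(4) it far outclasses restricted, safer models": 0.3,
    "(5) intelligence keeps paying off at extreme levels": 0.4,
}

p_doom = 1.0
for claim, p in premises.items():
    p_doom *= p  # doom requires every premise to hold

print(f"joint probability, assuming independence: {p_doom:.4f}")
# 0.5 * 0.3 * 0.4 * 0.3 * 0.4 = 0.0072
```

The independence assumption is doing real work here; if the premises are correlated (e.g., a model smart enough for (4) is likelier to manage (2) and (3)), the joint probability can be much higher than the naive product.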
A relevant difference is that nuclear bombs already exist, and AGIs do not…