The section “International Game Theory” does not seem to me like an argument against AI as an existential risk.
If the USA and China decide to have a non-cooperative AI race, my sense is that this would increase existential risk rather than reduce it.
Yep, I think this is true. The point is that, given that AI stays aligned (which is assumed in that section), the best thing for a country to do would be to accelerate capabilities. You’re right, however, that it’s not an argument against AI being an existential threat (I’ll make a note to make this clearer); it’s more a point in favor of acceleration.