I used to be concerned about this a lot from a "what if it sparks nuclear war" POV, and I suppose on some level I still am. To be brutally honest, though, one of my main paradigms for evaluating geopolitics and risk has increasingly shifted to focusing on just AI risk, with a little biosecurity sprinkled in.
For example: if China invaded Taiwan, would it set back AI capability timelines (because TSMC, which I believe produces a majority of leading-edge semiconductors, might get scuttled)? And/or would great power conflict incentivize military AI development, which shortens timelines?