This is an interesting strategic consideration! Thanks for writing it up.
Note that the probability of AsianTAI/AsianAwarenessNeeded depends on whether or not there is an AI risk hub in Asia. In the extreme, if you expect making aligned AI to take much longer than making unaligned AI, then making Asia concerned about AI risk might drive the probability of AsianTAI close to 0. Given how rough the model is, I don't think this matters that much.