I recently published a blog post where I tried to assess China’s importance as a global actor on the path to transformative AI. This was a relatively shallow dive, but I hope it can still spark an interesting conversation on this topic, and/or inspire others to research it further.
The post is quite long (over 6,000 words), so I’ll copy and paste my bottom-line takes, along with (roughly) how confident I am in each after brief reflection:
China is, as of early 2023, overhyped as an AI superpower − 60%.
That being said, the reasons that they might emerge closer to the frontier, and the overall importance of positively shaping the development of AI, are enough to warrant a watchful eye on Chinese AI progress − 90%.
China’s recent AI research output, as it pertains to transformative AI, is not quite as impressive as headlines might otherwise suggest − 75%.
I suspect hardware difficulties, and structural factors that push top-tier researchers towards other countries, are two of China’s biggest hurdles in the short-to-medium term, and neither seems easily solvable − 60%.
It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) − 70%.
A second- or third-place China that lags the US and its allies could still be important. Since AI progress has recently moved at a breakneck pace, being in second place might only mean being a year or two behind, though I suspect this gap will widen as the technology matures − 65%.
I might be missing some important factors, and I’m not very certain which ones matter most when thinking about this question − 95%.
This was a really interesting and useful read! Posting the summary from the end of the post, as I found it helpful (it’s the bullet-point list of takes above).
Kaiming He was at MSR (Microsoft Research) in China when he invented ResNets in 2015. Residual connections are part of transformers, and probably the second most important architectural breakthrough in modern deep learning.
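For anyone unfamiliar with the term: a residual (skip) connection just adds a layer’s input back to its output, which is what lets very deep networks train stably. A minimal PyTorch sketch (illustrative names, not from the post or the ResNet paper):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Wraps any shape-preserving sub-layer with a skip connection: out = x + f(x)."""

    def __init__(self, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The ResNet idea: add the input back to the sub-layer's output,
        # so gradients can always flow through the identity path.
        return x + self.sublayer(x)

# Transformer blocks use the same pattern around their attention and MLP
# sub-layers, roughly: x = x + attn(norm(x)); x = x + mlp(norm(x))
block = ResidualBlock(nn.Linear(16, 16))
out = block(torch.randn(2, 16))  # same shape as the input: (2, 16)
```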
You should make Manifold markets for all these statements and put them in the comments.
Thanks—I only read this linkpost and Haydn’s comment quoting your summary, not the linked post as a whole, but this seems to me like probably useful work.
One nitpick:
I feel like it’d be more useful/clearer to say “It seems x% likely that the US will create transformative AI before China, and y% likely if TAI is developed in short(ish) timelines (next 5-15 years)”. Because:
At the moment, you’re saying it’s 70% likely that the US will be “much more likely”, i.e. giving a probability for a qualitatively stated (hence somewhat vague) likelihood.
And that claim itself seems to be kind of, but not exactly, conditioned on short-timelines worlds. Or maybe instead it’s a 70% chance of the conjunction of “the US is much more likely (not conditioning on timelines)” and “this is especially so if there are short timelines”. It’s not really clear which.
And if it’s the conjunction, that seems less useful than knowing what odds you assign to each of the two claims separately.
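To illustrate with made-up numbers: a 70% probability on a conjunction only bounds each claim from below, since P(A and B) ≤ min(P(A), P(B)). For instance, P(A) = 0.7 with P(B) = 1, and (under independence) P(A) = P(B) = √0.7 ≈ 0.84, both yield the same 70% joint probability, so the separate odds can’t be recovered from the joint figure alone.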
Yeah, fair point. When I wrote this, I roughly followed this process:
Write article
Summarize overall takes in bullet points
Add some probabilities to show roughly how certain I am of those bullet points, where the process was something like “okay, I’ll re-read this and see how confident I am that each bullet is true”
I think it would’ve been more informative if I had written the bullet points with the explicit aim of adding probabilities to them, rather than writing them and only afterwards thinking “ah yeah, I should more clearly express my certainty with these”.
These are three different claims. Which one are you 65% confident in?
I think I was just reading all of those claims together and trying to subjectively guess how likely I find them to be jointly. To split them up, in order of each claim: 90%, 90%, 80%.
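(For what it’s worth, if those three estimates were roughly independent, they’d multiply out to 0.90 × 0.90 × 0.80 ≈ 0.65, which lines up with the 65% I put on the bullet as a whole.)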