I wonder how large the USSR's total fission research budget, across every institution, was prior to the Trinity test. My point is that this metric is a lagging indicator: if the Trinity-equivalent moment has already happened, you would expect to see a spike in research, a purge of plagiarized results, and real effort from China.
There are routes that would allow Chinese labs to match the current top Western labs, and even pull ahead, if they had full national backing. The obvious routes are espionage (historically, this is how the USSR caught up on fission) and RSI (recursive self-improvement), which requires an immense quantity of silicon. For China, catching up on silicon fabrication, if that is even possible, is as crucial as finding a source of uranium ore was for the USSR.
It's hard to say whether the Trinity moment has happened. GPT-4 is strong, but it isn't undeniably capable and it makes frequent errors. Maybe the next major model will be that moment.
I don't know how much AI slowdown the West can afford, but maybe it should focus on measures that won't cause significant deceleration. Simply prohibiting large interconnected GPU clusters in data centers that aren't registered with the government would be a start. Logging who is using the hardware, where the funding comes from, their human contact information, and how much compute they are consuming would be another.
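As a rough illustration of the kind of record such a registry might keep (a purely hypothetical sketch; the field names and structure are my own assumptions, not any real regulatory schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical compute-cluster registry record; fields are illustrative only.
@dataclass
class ClusterRegistration:
    operator_name: str    # legal entity operating the data center
    contact_email: str    # human point of contact
    funding_source: str   # declared source of funds
    gpu_count: int        # number of interconnected accelerators
    interconnect: str     # e.g. "NVLink", "InfiniBand"
    registered_at: datetime = field(default_factory=datetime.utcnow)

# Hypothetical per-job usage log tied to a registered cluster.
@dataclass
class UsageLogEntry:
    cluster_id: str       # reference to a ClusterRegistration
    user: str             # who ran the workload
    gpu_hours: float      # compute consumed by the job
    logged_at: datetime = field(default_factory=datetime.utcnow)
```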
Requiring better cybersecurity, especially around prototype AI systems, is another measure that wouldn't cost much or slow things down but would improve safety.