Great post! I think this is a valuable introduction to an uncertain and rapidly developing situation.
This seems a bit off. It implies that ASML makes all or most of the machinery in the manufacturing process, and/or that ASML is the only company in the space. It would be more accurate to say that ASML is the only company that makes the most advanced photolithography machines, and that photolithography is a key and necessary step in the chip fabrication process. (The other photolithography manufacturers, Nikon (Japan), Canon (Japan), and SMEE (China), cannot produce EUV photolithography machines, and their older-generation machines also seem substantially worse than ASML's. So it is true that ASML is broadly dominant in photolithography as a whole.)
Similarly, it is misleading/ambiguous to say that Japan controls photolithography—perhaps the sentence is meant to say that Japan controls some photolithography materials, like photoresists?
Thanks for these corrections! You’re right. I’ll make a few quick edits for now, and try to update it properly later (digging into the CSET report again).
Executive summary: Based on ML models, publications, patents, talent, investments, and supply chain factors, the United States and its allies seem to have a significant lead in AI development over China.
Key points:
The top AI labs, breakthroughs, and models come from the U.S. and allies. China leads in publications and patents but trails in quality.
The U.S. invests the most in AI and has more access to top talent.
The semiconductor supply chain is dominated by the U.S. and allies. Export controls will likely limit China’s access.
Censorship and other factors may hinder AI progress in China.
Slowing AI progress in the U.S. would likely also slow progress in China, as China relies significantly on research from abroad.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I wonder how large the USSR's fission research budget (the total across every institution) was prior to the Trinity test. My point is that this kind of model is a lagging indicator. If the Trinity equivalent has already happened, then you will see a spike in research, a clean-out of plagiarized results, and real effort from China.
There are routes that would allow Chinese labs to match the current top Western labs, and even pull ahead, if they had full national support. The obvious routes are espionage (historically, this is how the USSR caught up on fission) and RSI, which requires an immense quantity of silicon. For China, catching up on silicon fabrication, if that is possible at all, is as crucial as finding a source of uranium ore was for the USSR.
It's hard to say whether Trinity has happened yet. GPT-4 is strong, but it's not undeniably capable and it makes frequent errors. Maybe the next major model will be that moment.
I don't know how much AI slowdown the West can afford, but maybe it should focus on measures that won't lead to a significant deceleration. For example, simply disallowing large interconnected GPU clusters in data centers that are unregistered with the government would be a start. Logging the users of the hardware, the source of the funds, their human contact information, and how much compute they are using would be another.
Requiring better cybersecurity, especially on prototype AI systems, is another measure that wouldn't cost much or slow things down but would improve safety.
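To make the registration idea concrete, here is a minimal sketch of what a registry record for a large GPU cluster might look like, tracking the fields mentioned above (operator, funding source, contact information, compute used). All names and the registration threshold are hypothetical illustrations, not drawn from any real regulation:

```python
from dataclasses import dataclass

# Illustrative cutoff: clusters at or above this size would need to register.
REGISTRATION_THRESHOLD_GPUS = 1000


@dataclass
class ClusterRegistration:
    """Hypothetical record in a government registry of large GPU clusters."""
    operator: str            # who runs the hardware
    funding_source: str      # where the money came from
    contact_email: str       # human point of contact
    gpu_count: int           # interconnected accelerators in the cluster
    compute_used_flop: float = 0.0  # cumulative compute logged so far

    def log_usage(self, flop: float) -> None:
        """Accumulate reported compute usage for this cluster."""
        self.compute_used_flop += flop


def requires_registration(gpu_count: int) -> bool:
    """Would a cluster of this size fall under the hypothetical rule?"""
    return gpu_count >= REGISTRATION_THRESHOLD_GPUS
```

For example, a 2,048-GPU training cluster would need to register under this sketch, while a 10-GPU research node would not; each large training run would then add its compute to the cluster's logged total.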
This post might need to be updated, as China has made big strides on regulation: at the third Belt and Road Forum for International Cooperation, President Xi announced the 'Global AI Governance Initiative' (GAIGI) for participating countries of the Belt and Road Initiative (i.e. China's $1 trillion global infrastructure program).
I appreciate posts that provide concise comparative overviews of complex concepts. I have some questions that may seem basic to some, but I’d love to receive answers nonetheless.
OpenAI, DeepMind, and Meta are leading labs in AI development, both empirically (e.g., ChatGPT) and financially (in terms of resources). China is known for its ability to replicate existing research rather than create it. Given the concerns about AI and AGI development, particularly the risk of extinction, why do these American and British labs continue their AI work without pausing? Is there external pressure from governments or other, possibly hostile, nations? I'm trying to understand whether there are motivations beyond just capitalizing on AI's current momentum, similar to some scientists during the development of the A-bomb who pursued it for personal fame and scientific curiosity while disregarding the risks.
Additionally, although this may not directly relate to your post, have we considered that the emphasis on AI safety, while creating more jobs in that field, might actually stimulate AI growth and increase the risks of extinction? There’s a shared sentiment in the Effective Altruism (EA) community that more people are joining out of interest in AI (safety or otherwise), as it serves as a hub for discussions and funding related to AI. These newcomers might face a dilemma: Are they willing to work for the greater good, even if it means pausing AI development and potentially affecting their livelihoods? How committed are they to their values when it comes to reducing job opportunities and growth in their passionate field? I apologize if this isn’t the ideal platform for these discussions, but they are infrequently addressed in the forum, and I thought they might relate to the topic of talent in AI.
Edit: all I'm doing is asking genuine questions, and I'm being downvoted to hell. If you disagree with the usefulness of the questions, tick the 'I disagree' box (and even then, why do you care whether my questions get answered?), but downvoting me just screams 'I refuse criticism on this topic, and such questions shouldn't be answered.' That is neither honest nor rational, and I'm quite sure that those who downvoted me pride themselves a great deal on being rational.