Fair point; the answer is unclear and could change. The most important fact, IMO, is that two of the leading AGI companies, OpenAI and DeepMind, are explicitly concerned with x-risk and have invested seriously in safety (not as much as I’d like, but significant investments). I’d rather those companies reach AGI than others that don’t care about safety. They’re US-based and benefit, relative to Chinese companies, from US policy that slows China.
Second, while I don’t think Joe Biden personally thinks or cares about AI x-risk, I do think US policymakers are more likely than their Chinese counterparts to be convinced of its importance. Most of the people arguing for AI x-risk are English-speaking, and I think they’re gaining some traction. Some evidence:
The Global Catastrophic Risk Management Act, introduced by Senators Portman and Peters, is clearly longtermist in motivation. From the act: “Not later than 1 year after the date of enactment of this Act, the President, with support from the committee, shall conduct and submit to Congress a detailed assessment of global catastrophic and existential risk.” Several press releases explicitly mentioned risks from advanced AI, though not the alignment problem. This seems indicative of longtermism and EAs gaining traction in DC.
https://www.congress.gov/bill/117th-congress/senate-bill/4488/text
The National Security Commission on AI, commissioned by Congress in 2018, did not include x-risk in its report, which is disappointing. That group, led by former Google CEO Eric Schmidt, has continued its policy advocacy as the Special Competitive Studies Project. They are evidently aware of x-risk concerns, as they cited Holden Karnofsky’s writeup of the “most important century” hypothesis. Groups like these seem like they could be persuaded of the x-risk hypothesis and could successfully advocate sensible policy to the US government.
https://www.scsp.ai/reports/mid-decade-challenges-for-national-competitiveness/preface/
Finally, there are think tanks that explicitly care about AI x-risk. My understanding is that CSET and CNAS are the two leaders, but the strong EA grantmaking system could easily spur more, and more successful, advocacy.
On the other hand, I’m unaware of a single major group in China that professes to care about x-risk from AI. I might not know even if such groups existed, so if there’s any evidence I’d love to hear it. China does seem to have much stronger regulatory capacity, and would probably be better at implementing compute controls and other “pivotal acts”. But without a channel to communicate why they should do so, I’m skeptical that they will.
The application form is showing up as private for me. Very cool idea, though; the success of EleutherAI and Stability AI suggests that this is a viable model. Excited to see it unfold and hopefully contribute!