Zach writes in an email: “Much/most of my concern about China isn’t China has worse values than US or even Chinese labs are less safe than Western labs but rather it’s better for leading labs to be friendly with each other (mostly to better coordinate and avoid racing near the end), so (a) it’s better for there to be fewer leading labs and (b) given that there will be Western leading labs it’s better for all leading labs to be in the West, and ideally in the US […]
I’m curious why Zach thinks it would be ideal for leading AI labs to be in the US. I tried to consider this through the lens of regulation. I haven’t read extensively on how AI regulation compares across countries, but my impression is that the US federal government is resting on its laurels with respect to AI regulation. State and municipal governments present a somewhat different picture, and, whilst their intentions differ, the EU and the UK have been moving much more swiftly than the US federal government.
My opinion would change if regulation doesn’t play a large role in how successful an AI pause is, e.g. if industry players could voluntarily practice restraint. There are also other factors that I’m not considering.