I think most of those people believe that “having an AI aligned to ‘China’s values’” would be comparably bad to a catastrophic misalignment failure. If you believe that, then 5% is not sufficient, provided you also think there is a greater than 5% chance of China ending up with “aligned AI” instead.
I don’t think that’s a reasonable position to hold, but I don’t know how to constructively argue against it in a short comment, so I’ll just register my disagreement.
Like, presumably China’s values include humans existing and having mostly good experiences.
Yep, I agree with this, but it nevertheless appears to be a relatively prevalent opinion among many EAs working in AI policy.