Given your position, I am concerned about the arms-race accelerationist messaging in this post. Substantively, the major claims of the post are “Chinese AI progress poses a serious threat that we must overcome via our own AI progress (that is, we are in an arms race)” and “society may regulate AI such that projects that don’t meet a very high standard of safety will not be deployable”. The argument is that pursuing safety follows from these premises, mostly the latter.
This can be interpreted in a number of ways, charitably or uncharitably. Independent of that, I do not think it is a good idea to talk about AI this way with respect to geopolitics. Arms-race framing has a very bad track record with other technologies such as nuclear weapons, and I’m not sure who the intended audience is (are the CEOs of capabilities labs China hawks who can only be convinced to slow down if the case is framed in terms of beating China? big if true).