I’m not necessarily endorsing this view myself, but I think the idea is that any alignment scenario (alignment with any human or group of humans) would be a triumph compared to “doom”.
I do think that, in practice, if the alignment problem is solved, then yes, whoever gets there first would get to decide. That might not be as bad as you think, though: China is repressive in order to maintain social control, but in a super-AGI scenario that kind of repression wouldn’t necessarily be required to maintain control.
I did not downvote your post.