Your comments are very blunt and challenging for EAs, but they are, I think, very accurate in many cases.
The AGI-accelerationists are at the very center of AGI X-risk, not least because many of them see human extinction as a positively good thing. In a very real sense, they are the X-risk, and the ASI they crave is just the tool they want to use to make humanity obsolete (and then gone entirely).
And, as you point out, the e/acc enthusiasts often have no epistemic standards at all, and are willing to use any rhetoric they think will be useful (e.g. ‘If you don’t support American AI hegemony, you want Chinese AI hegemony’; ‘If you don’t support AGI, you want everyone to die without the longevity drugs AGI could discover’; ‘If you oppose AGI, you’re a historically & economically ignorant luddite’, etc.)