Thanks for this post—I’ve learned things about the AI safety community that I didn’t realize before. I wonder if much of the value of external criticism isn’t in changing the behavior of those being criticized, but rather in explicitly stating, and making into common knowledge, negative factors that by default are not talked about publicly as much. (Both for future projects to do things differently, and for people today to update on how to relate to the entities involved.)
There does seem to be non-negligible content in the references to hits-based giving and the lower funding bar, but otherwise I agree.
It’s something that was recently invented on Twitter; here is the manifesto they wrote: https://swarthy.substack.com/p/effective-accelerationism-eacc?s=w

It’s only believed by a couple of people afaict, and unironically maybe by no one (although this doesn’t make it unimportant!).