Thanks for your sincere reply (I’m not trying to say other people aren’t sincere; I just particularly felt like mentioning it here).
Here are my thoughts on the takeaways you thought people might have.
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster).
As I said in my comment, I think that it’s true that the actions of EA-branded orgs are largely influenced by a relatively small number of people who consider each other allies and (in many cases) friends. (Though these people don’t necessarily get along or agree on things—for example, I think William MacAskill is a well-intentioned guy but I disagree with him a bunch on important questions about the future and various short-term strategy things.)
If the authors of this post are asking for community opinion on which changes are good after raising concerns, then the top comment (for a while at least) criticising this for a lack of theory of change suggests low regard by the EA leadership for the opinions of the EA community overall (regardless of agreement with any specific element of the original post).
Not speaking for anyone else here, but it’s totally true that I have a pretty low regard for the quality of the average EA Forum comment/post, and don’t think of the EA Forum as a place where I go to hear good ideas about ways EA could be different (though occasionally people post good content here).
Unless I am very high up and in the core EA group, I am unlikely to be listened to.
For whatever it’s worth, in my experience, people who show up in EA and start making high-quality contributions quickly get a reputation among people I know for having useful things to say, even if they don’t have any social connection.
I gave a talk yesterday where someone I don’t know made some objections to an argument I made, and I provisionally changed my mind about that argument based on their objections.
While EA is open to criticism in theory, it is not open to changing based on criticism, as the leadership has already reasoned about this as much as they are going to.
I think “criticism” is too broad a category here. I think it’s helpful to provide novel arguments or evidence. I also think it’s helpful to provide overall high-level arguments where no part of the argument is novel, but it’s convenient to have all the pieces in one place (e.g. Katja Grace on slowing down AI). I (perhaps foolishly) check the EA Forum and read/skim potentially relevant/interesting articles, so it’s pretty likely that I end up reading your stuff and thinking about it at least a little.
I also think some of the suggestions are likely more relevant to, and require more thought from, people actively working in e.g. community building strategy than from someone who is CTO of an AI alignment research organisation (per your profile), or in a technical role more generally, at least in terms of the considerations required to have the greatest impact in their work.
You’re right that my actions are less influenced by my opinions on the topics raised in this post than community building people’s are (though questions about e.g. how much to value external experts are relevant to me). On the other hand, I am a stakeholder in EA culture, because capacity for object-level work is the motivation for community building.