There seem to have been a lot of responses to your comment already, but there are some points that I don’t see addressed yet.
I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focusses on community building/community health. (Put on top as this got quite long; rationale below, but first:)
I think at least one goal of the post is to get community input (something I’ve seen in many previous forum posts) to determine the best suggestions, without claiming to have all the answers. Quoting from the original post (the intro to ‘Suggested Reforms’):
Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
This suggests to me that instead of trying to convince the ‘EA leadership’ of any one particular change, they want input from the rest of the community.
From a community building perspective, I can see (epistemic status: brainstorming, but plausible) that a comment like yours can be harmful and create a more negative perception of EA than the post itself. Perhaps new/newer/potential (and even existing) EAs will read the original post, and they may skim it, read only parts, or even read the comments first (I don’t think very many people will have read all 84 minutes of it, and the comments on long posts sometimes point to key/interesting sections). And the top comment, highly upvoted: yours.
Impressions they could draw from your response (one or more of the below):
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing confirmation than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than, e.g., a professional cluster).
If the authors of this post are asking for community opinion on which changes are good after laying out their concerns, then the top comment (for a while, at least) criticising the post for lacking a theory of change suggests that the EA leadership holds the opinions of the wider EA community in low regard (regardless of agreement with any specific element of the original post).
Unless I am very high up and in the core EA group, I am unlikely to be listened to.
While EA is open to criticism in theory, it is not open to changing based on criticism, as the leadership has already reasoned about this as much as it is going to.
I am not saying that any of the above is true, or that it is absolute (i.e. that someone would come to believe one of these things absolutely rather than on a sliding scale). But if I were new to EA, it is plausible that this comment would be far more likely to put me off continuing to engage than anything written in the actual post itself. Perhaps you can see how it may be perceived this way, even if it was not intended this way?
I also think some of the suggestions are likely more relevant to, and require more thought from, people actively working in e.g. community building strategy than from someone who is CTO of an AI alignment research organisation (per your profile), or in a technical role more generally, at least in terms of the considerations required to have the greatest impact in their work.
Thanks for your sincere reply (I’m not trying to say other people aren’t sincere; I just particularly felt like mentioning it here).
Here are my thoughts on the takeaways you thought people might have.
There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing confirmation than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than, e.g., a professional cluster).
As I said in my comment, I think that it’s true that the actions of EA-branded orgs are largely influenced by a relatively small number of people who consider each other allies and (in many cases) friends. (Though these people don’t necessarily get along or agree on things—for example, I think William MacAskill is a well-intentioned guy but I disagree with him a bunch on important questions about the future and various short-term strategy things.)
If the authors of this post are asking for community opinion on which changes are good after laying out their concerns, then the top comment (for a while, at least) criticising the post for lacking a theory of change suggests that the EA leadership holds the opinions of the wider EA community in low regard (regardless of agreement with any specific element of the original post).
Not speaking for anyone else here, but it’s totally true that I have a pretty low regard for the quality of the average EA Forum comment/post, and don’t think of the EA Forum as a place where I go to hear good ideas about ways EA could be different (though occasionally people post good content here).
Unless I am very high up and in the core EA group, I am unlikely to be listened to.
For whatever it’s worth, in my experience, people who show up in EA and start making high-quality contributions quickly get a reputation among people I know for having useful things to say, even if they don’t have any social connection.
I gave a talk yesterday where someone I don’t know made some objections to an argument I made, and I provisionally changed my mind about that argument based on their objections.
While EA is open to criticism in theory, it is not open to changing based on criticism, as the leadership has already reasoned about this as much as it is going to.
I think “criticism” is too broad a category here. I think it’s helpful to provide novel arguments or evidence. I also think it’s helpful to provide overall high-level arguments where no part of the argument is novel, but it’s convenient to have all the pieces in one place (e.g. Katja Grace on slowing down AI). I (perhaps foolishly) check the EA Forum and read/skim potentially relevant/interesting articles, so it’s pretty likely that I end up reading your stuff and thinking about it at least a little.
I also think some of the suggestions are likely more relevant to, and require more thought from, people actively working in e.g. community building strategy than from someone who is CTO of an AI alignment research organisation (per your profile), or in a technical role more generally, at least in terms of the considerations required to have the greatest impact in their work.
You’re right that my actions are less influenced by my opinions on the topics raised in this post than community building people’s are (though questions about e.g. how much to value external experts are relevant to me). On the other hand, I am a stakeholder in EA culture, because capacity for object-level work is the motivation for community building.