It’s worth pointing out that the assumption that growing social capital comes at the cost of truth-seeking is not necessarily true. Sometimes the general public is correct, and your group of core members is wrong. A group that is insular, homogeneous, and unwelcoming to new members can risk having incorrect beliefs and assumptions frozen in place, whereas one that is welcoming to newcomers with diverse skills, backgrounds, and beliefs is more likely to have poor assumptions and beliefs challenged.
I agree, taking your words literally, with all their qualifications (“can risk,” “not necessarily true,” “sometimes,” etc.). There are a few other important caveats I think EA needs to keep in mind.
“Newcomer” is a specific role in the group, one that can range from a hostile onlooker, to a loud and incompetent novice, to a neutral observer, to a person using the group for status or identity rather than to contribute to its mission, to a friendly and enthusiastic participant, to an expert in the topic area who hasn’t been part of the specific group before.
Being welcoming to newcomers does not mean tolerating destructive behavior, and it can absolutely encompass vigorous and ongoing acculturation of the newcomer to the group, requiring them to conform to the group’s expectations in order to preserve the group’s integrity.
To keep such acculturation efforts from producing total conformism and cultish behavior, the group needs to find a way to enact them that is professional and limited to specific, appropriate domains.
Generally, the respect that a participant has earned from the group is an important determinant of how seriously their proposals for change will be taken. They should expect that most of the ideas they have for change will be bad ones, and that the group knows better, until they’ve spent time learning and understanding why things are done the way they are (see Chesterton’s Fence). Over time, they will gain the ability to make more useful proposals and be entrusted with greater independence and responsibility.
In EA, I think we have foolishly focused WAY too much on inviting newcomers in, and completely failed at acculturation. We also lack adequate infrastructure to build the intimate working relationships that allow bonds of individual trust and respect to develop and structure the group as a whole. Those pods of EAs who have managed to do this are typically those working in EA orgs, and they get described as “insular” because they haven’t managed to integrate their local networks with the broader EA space.
I don’t see a need to be more blandly tolerant of whatever energy newcomers are bringing to the table in EA. Instead, I think we need very specific interventions to build one-on-one, long-term working relationships between less and more experienced EAs, and a better way to update the rest of EA on the behavior and professional accomplishments of individual EAs. Right now, we appear to be largely dependent on hostile critics to provide this informational feedback loop, and it’s breaking our brains. At the same time, we’ve spent the last few years scaling up EA participation without commensurate efforts at acculturation, and that has badly shrunk our capacity to acculturate, possibly beyond recovery.
I absolutely agree! To put it more plainly, my intuition is that this distinction is a core cause of tension in the EA community, and the single most important thing to discuss as EA plans how to grow its impact over time.
I’ve come down on the side of social capital not because I believe the public is always right, or because we should put every topic to a sort of ‘wisdom of the crowds’ referendum. In fact, I think a core strength of EA, and of rationalism in general, is the refusal to accept popular consensus at face value.
From my perspective, EA has over time leaned too far in the direction of supporting outlandish and difficult-to-explain cause areas, without giving any thought to convincing the public of these arguments. AI Safety is a great example here. Regardless of your AI timelines or your priors on how likely AGI is to come about, it seems like a mistake to me that so much AI Safety research and discussion is gated. Most of the things EA talks about with regard to the field would absolutely freak out the general public; I know this from running a local community organization.
In the end, if we want to grow and become an effective movement, we have to at least optimize for attracting workers in tech, academia, etc. If many of our core arguments cease to be compelling to these groups, we should take a look at our messaging and try to keep the core of the idea while tweaking how it’s communicated.
“Most of the things EA talks about with regard to the field would absolutely freak out the general public.” This is precisely what worries me, and presumably others in the field. Freaking people out is a great way of making them take wild, impulsive actions that are as likely to be net-negative as net-positive. Communication with the public should probably aim not to freak them out.
EA is currently growing relatively fast, so I suspect that the risk of insularity is overrated for now. However, this is a concern that I would have to take more seriously if recent events were to cause movement growth to fall off a cliff.
Excellent points here. I think this is close to what I am trying to get at.
I agree that we shouldn’t just open the floodgates and invite anyone and everyone.