Why EA Community building
I have heard from people who are uncertain about whether EA community building is the right move for them, given the increased prominence of AI Safety. I think EA community building is the right choice for a significant number of people, and I want to lay out why.
AI Safety Community building seems important
I’m excited to see AI Safety-specific community building and I hope it continues to grow. This piece is not intended to claim that no one should be working on AIS community building. That said, CEA’s groups team sits within an EA organisation, not an AI safety organisation. I hope we can collaborate with AI Safety groups, as:
It would likely benefit both parties to sync on issues like data collection
I think there are lessons from EA community building that would be relevant and valuable to share
The reasons I think the case for AI Safety community building is strong are:
If we want people to work in AI Safety, directly talking about AI Safety seems the most straightforward way to do this
There are talented people who will find the AI Safety framing attractive, but would not like the EA framing
Early AIS community building efforts have managed to attract significant numbers of talented individuals (although I don’t think it’s inevitable that these early wins will scale, or successfully avoid causing accidental harm)
EA community building is also important
I think EA community building is still very valuable, for five reasons:
EA groups have been successful
In the 2020 EA and Longtermist survey, local groups were mentioned by 42% of respondents
I care about EA values in decision makers during crunch time. E.g., I think people in the EA movement have thought unusually deeply about what catastrophes would and wouldn’t lead to the loss of humanity’s future potential
Having a compelling answer to the question “how do I do the most good” or “how do I live a good life” has been something that has historically attracted a lot of talent, and talent that would not necessarily have been attracted by AI Safety (conversely I expect AI Safety groups to attract people who wouldn’t be drawn to discussions of “how do I do the most good”)
Specific talented organisers can be a better fit for either AIS or EA, and I want both options to exist
I think for both options to exist, both options need to have great organisers. If all of the best organisers went for a single option, I think the other option would either become irrelevant, or cease to exist.
It seems important to note that the risks from AI may not manifest in the way EAs widely expect, in which case we’ll be glad to have a network of people who care about EA ideas
I want to see collaboration between EA and AIS community building
Although this isn’t the reason I’d like to see AIS CB, there is an extra benefit: I think work on the most pressing problems could go better if EAs did not form a super-majority (but still a large enough faction that EA principles are strongly weighted in decision making)
Since the FTX crisis there has been increasing discussion about how trusting EAs are. Although I think the FTX crisis could have happened in less trusting communities (e.g., many VCs also lost money in FTX), I think it is true that there are areas where high trust is harmful. I think operating in an environment where EAs aren’t a super-majority would improve certain processes that currently rely too heavily on trust. Additionally, I think having EA form part of your identity can cause in-group effects, where ideas from the outgroup aren’t taken seriously enough. I suspect this would be lessened if people identifying as EA didn’t form a majority
Based on the above, EA community building should update some of the ways in which it works
I think this has some updates on how EA groups should operate (this was written with city and national groups in mind, but parts are more widely applicable)
At the top of the funnel:
Targeting outreach: If there are also AI Safety groups operating in the same area as you, the counterfactual impact of attracting someone in the AI Safety groups’ target audience is lower. Nonetheless, it remains (importantly) true that people other than machine learning experts can contribute to the world’s most pressing problems
Less “defensive” messaging: as EA moved from global health to AI safety, the core EA principles remained the same, but the messaging changed. There was a need to show that these ideas aren’t “too weird”. As AI Safety is normalised, EA messaging should become more similar to how it was when EA was principally about global health interventions.
Note that less defensive messaging doesn’t mean jumping straight to AI. I think it’s important to:
Be serious about trying to work out what the most important thing is
Be open, early, that AIS is the best guess of many people right now
Don’t forget that AIS may not be right for everyone, and also that we might be wrong about AIS
In a world where lots of people working on the most important problems don’t identify as EA, I believe that EA groups should, on the margin, place a greater focus on EA ideas relative to EA community. I believe that community is important (see the importance of personal connections here), and most groups that have produced a lot of value have both a focus on ideas AND a strong community. However, I think (a) it is more common to over-focus than to under-focus on community, and (b) a community built around the discussion of ideas is likely to attract people who can improve the world. Concretely, focussing on EA ideas might mean reducing the number of socials and increasing the number of fellowships, learning projects, and discussions (importantly, there can be social elements to these)
In the rest of the funnel (with much lower confidence):
More partnerships/alliances
Partnerships can be a pipeline into discovering EA ideas, as well as a pipeline to important positions
Strengthens the idea of having people working on the most important problems rather than in EA institutions
I think EAs significantly outperform other communities in identifying which issues are important for humanity (stemming from scout mindset and openness), and modestly outperform most other communities at forecasting. However, I don’t think EAs have across-the-board good judgement, and they would benefit from partnering with people who have a good understanding of how things work in specific domains
A realism about where specific parts of the most important work will be done
It seems increasingly likely that the majority of cutting-edge AI work will be done in the USA, and as the midgame is here, this seems unlikely to change
Yes, valuable safety research can be done outside the US, but proximity to the Bay Area seems likely to help
Yes, it does feel unfair that certain work requires being in certain countries. This unfairness is compounded by uneven immigration laws
Despite the unfairness of the situation, I think it is important that groups have a clear plan for how their efforts actually lead to people working on the most important problems
There are other important pieces of work which are less geographically bound (and as such, might be a promising comparative advantage for local groups)
What worries me about these changes
Having an EA community gives people some sort of social permission to take important ideas seriously—e.g., someone’s first retreat is often disproportionately impactful. There is a risk of over-correcting and neglecting community aspects of EA groups.
The community collectively plays a filtering role, moving the right people to the right opportunities, and reducing the focus on community could reduce its ability to play this role. However, the current filtering is significantly flawed, as it also filters on “people who like to hang out with EAs.”
HT: To the CEA Groups team, for their comments and ideas
You say community building, but the specifics you describe seem more like recruiting and outreach. All three of those can be good things, but I think conflating them is unhelpful. I think this is especially true because EA is already very aggressive at recruiting and mediocre at post-recruitment support.
I think that’s the first time I’ve seen this written as clearly as here, and I don’t really like it or agree with it. My impression is that there are many people attracted to EA not because of AIS, who also won’t become interested in AIS or aren’t the right fit for that field. If the money for community building comes mainly from an interest in attracting more people into AIS (as it sounds here), and is mainly intended for that, why keep funding EA in general? I would welcome more nuanced portrayals of what EA community building aims to support, like facilitating other types of longtermist career changes, creating an intellectual community motivated by similar moral goals, and supporting people who have changed their careers to stick with their paths.
On the last point, and in line with what Elisabeth pointed to: I also get the impression that you forgot to mention the value of community for keeping strong values and sticking to your plan, especially if you move in a work culture that incentivises very different values than what EAs tend to value. Having a community of like-minded people with similar core values is important for those who won’t change careers anymore, but want to stick to the highly impactful ones they have chosen to pursue. The value of community to them comes from helping them stick to their path.
Apologies, I think I should be clear that when I say “the messaging changed” I’m just describing what I believed happened, not that I think it was a good thing. I agree that some people aren’t interested in AIS, or aren’t the right fit, but can still make the world substantially better. I do however think that we should openly say “we think AIS is an important cause area” and should spend less time arguing why that isn’t a weird thing to think.
I agree that this is a value of community building, but it seems similarly relevant for explicitly longtermist community building and broad EA community building?
1 - All good, and sorry for my late reply :) I think I understand better what you meant now.
2 - I’d agree, yes.
I agree that EA community building could be a good option for some subset of people who want to ensure that AI goes well.
There are some people who are well-positioned to do EA community building, but lack the skills to contribute towards AI governance or technical community building. Actually, I would go further and say that a much broader set of people would be suited for EA community building rather than anything AI safety specific.
That said, there are some other options you should consider too. If AI safety is what you care about and you don’t have sufficient AI safety or governance knowledge to work on it directly, you may want to consider doing either x-risk or longtermist community building to narrow the focus. On the other hand, it’s also very important to consider the interests of those in your area.
Additionally, you may also want to consider whether you could have a greater impact by volunteering to provide ops support to someone working on AI safety movement building. That said, this requires you to be highly motivated and reliable, something that is much harder than it seems; otherwise your impact might be minimal.
Being a highly motivated, reliable, and intelligent volunteer seems pretty underestimated as a source of potential impact. If you take a salaried position, your impact is roughly your edge over the person who would otherwise have held your position. It’s easy to imagine competent employees having a negative counterfactual impact by displacing someone better.
On the other hand, if you are a reliable, motivated, intelligent volunteer, you are simply providing excellent resources. Volunteering for promising projects that fall in the funding cracks could be quite high-EV for those without the financial means to help important projects.
But I would not recommend such volunteering unless you are serious… It’s very easy to have negative value to an organization as a volunteer if you take up an organization’s time and resources and leave shortly after, or stick around without actually doing much to help.
Thanks for this. I think it could’ve been even better with a stronger statement on the importance of EA ideas, values, and mindsets. I recognize that you somewhat mention this under reasons 2 and 5, but I would’ve liked to see it stated even more strongly.