Why EA community building

I have heard from people who are uncertain about whether EA community building is the right move for them, given the increased prominence of AI Safety. I think that EA community building is the right choice for a significant number of people, and I wanted to lay out why I believe this.

AI Safety community building seems important

I’m excited to see AI Safety-specific community building and I hope it continues to grow. This piece is not intended to claim that no-one should be working on AIS community building. Although CEA’s groups team sits within an EA organisation, not an AI Safety organisation, I hope we can collaborate with AI Safety groups, as:

  • It would likely benefit both parties to sync on issues like data collection

  • I think there are lessons learned from EA community building that would be relevant and valuable to share

The reasons I think the case for AI Safety community building is strong are:

  • If we want people to work in AI Safety, directly talking about AI Safety seems the most straightforward way to do this

  • There are talented people who find the AI Safety framing attractive but would not like the EA framing

  • Early AIS community building efforts have managed to attract significant numbers of talented individuals (although I don’t think it’s inevitable that these early wins will scale, or that they will avoid causing accidental harm)

EA community building is also important

I think EA community building is still very valuable, for five reasons:

  1. EA groups have been successful

    1. In the 2020 EA and Longtermist survey, local groups were mentioned by 42% of respondents

  2. I care about decision-makers holding EA values during crunch time. E.g., I think people in the EA movement have thought unusually deeply about which catastrophes would and wouldn’t lead to the loss of humanity’s future potential

  3. Having a compelling answer to the questions “how do I do the most good?” and “how do I live a good life?” has historically attracted a lot of talent, including talent that would not necessarily have been attracted by AI Safety (conversely, I expect AI Safety groups to attract people who wouldn’t be drawn to discussions of “how do I do the most good”)

  4. A given talented organiser can be a better fit for either AIS or EA community building, and I want both options to exist

    1. I think that for both options to exist, both need great organisers. If all of the best organisers went for a single option, the other would either become irrelevant or cease to exist.

  5. It is still possible that the risks from AI won’t manifest in the way that EAs widely expect, in which case we’ll be glad to have a network of people who care about EA ideas

I want to see collaboration between EA and AIS community building

Although this isn’t the reason I’d like to see AIS community building, there is an extra benefit: I think work on the most pressing problems could go better if EAs did not form a super-majority (but still formed a large enough faction that EA principles are strongly weighted in decision making)

Since the FTX crisis there has been increasing discussion about how trusting EAs are. Although I think the FTX crisis could have happened in less trusting communities (e.g., many VCs also lost money in FTX), I think it is true that there are areas where high trust is harmful. Operating in an environment where EAs aren’t a super-majority would improve certain processes that currently rely too heavily on trust. Additionally, I think having EA form a part of your identity can cause in-group effects, where ideas from the out-group aren’t taken seriously enough. I suspect this would be lessened if people identifying as EA didn’t form a majority.

Based on the above, EA community building should update some of the ways in which it works

I think this implies some updates to how EA groups should operate (this was written with city and national groups in mind, but parts are more widely applicable)

At the top of the funnel:

  • Targeting outreach: if AI Safety groups are also operating in your area, the counterfactual impact of attracting someone in those groups’ target audience is lower. Nonetheless, it remains (importantly) true that people other than machine learning experts can contribute to the world’s most pressing problems

  • Less “defensive” messaging: as EA’s focus expanded from global health to AI Safety, the core EA principles remained the same, but the messaging changed. There was a need to show that these ideas aren’t “too weird”. As AI Safety is normalised, EA messaging can become more similar to how it was when EA was principally about global health interventions.

    Note that less defensive messaging doesn’t mean jumping straight to AI. I think it’s important to:

    1. Be serious about trying to work out what the most important thing is

    2. Be open, early on, that AIS is the best guess of many people right now

    3. Remember that AIS may not be right for everyone, and that we might be wrong about AIS

  • In a world where lots of people working on the most important problems don’t identify as EA, I believe that EA groups should, on the margin, focus more on EA ideas relative to EA community. Community is important (see the importance of personal connections here), and most groups that have produced a lot of value have both a focus on ideas AND a strong community. However, I think (a) it is more common to over-focus than to under-focus on community, and (b) a community built around the discussion of ideas is likely to attract people who can improve the world. Concretely, focussing on EA ideas might mean reducing the number of socials and increasing the number of fellowships, learning projects, and discussions (importantly, these can have social elements)

In the rest of the funnel (with much lower confidence):

  • More partnerships/alliances

    • Partnerships can be a pipeline into discovering EA ideas, as well as a pipeline to important positions

    • Partnerships strengthen the norm of people working on the most important problems rather than in EA institutions

    • I think EAs significantly outperform other communities at identifying which issues matter most for humanity (stemming from scout mindset and openness), and modestly outperform most other communities at forecasting. However, I don’t think EAs have good judgement across the board, and they would benefit from partnering with people who have a good understanding of how things work in specific domains

  • Realism about where specific parts of the most important work will be done

    • It seems increasingly likely that the majority of cutting-edge AI work will be done in the USA, and now that the midgame is here, this seems less likely to change

      • Yes, valuable safety research can be done outside the US, but proximity to the Bay Area seems likely to help

      • Yes, it does feel unfair that certain work requires being in certain countries. This unfairness is compounded by uneven immigration laws

      • Despite the unfairness of the situation, I think it is important that groups have a clear plan for how their efforts actually lead to people working on the most important problems

    • There are other important pieces of work that are less geographically bound (and, as such, might be a promising comparative advantage for local groups)

What worries me about these changes

  • Having an EA community gives people some sort of social permission to take important ideas seriously; e.g., someone’s first retreat is often disproportionately impactful. There is a risk of over-correcting and neglecting the community aspects of EA groups.

  • The community collectively plays a filtering role, moving the right people to the right opportunities, and reducing the focus on community could weaken this function. However, the current filtering is significantly flawed, as it also filters for “people who like to hang out with EAs.”

HT to the CEA Groups team for their comments and ideas.