How Can Each Cause Area in EA Become Well-Represented?
Summary: There is disagreement over the appropriate share of resources, beyond funding, that different cause areas in EA receive, such as how prominently public events, materials, and resources in EA feature different causes. I think a lack of clarity in these community discussions stems from a lack of clarity about which measures and metrics matter most, and a lack of awareness of all the different and unique ways causes have flourished within EA as an ecosystem. In the last couple of years, several causes have found many ways to develop themselves and mobilize resources under the umbrella of EA. This is an alternative to different cause areas in EA competing for apparently scarce resources provided through a single source like the Centre for Effective Altruism. I think EA overall can extend the lessons learned from how individual cause areas have developed themselves over the last few years, and with creative problem-solving can realize the potential for all clusters of EA to become more self-sustaining.
Disagreements Over Representation in EA
One issue in EA is how the question of how much a single cause area should be prioritized gets framed. In a talk at EAG London 2018, Ben Garfinkel of the Future of Humanity Institute asked: “How sure are we about this AI stuff?” (transcript). In his talk, Ben went over the ways AI safety, alignment, and governance are extremely neglected:
Depending on how you count, there are probably fewer than a hundred people in the world working on technical safety issues or governance challenges with an eye towards very long-term impacts. And that’s just truly, very surprisingly small. The overall point though, is that the exact size of the bet that EA should make on artificial intelligence, sort of the size of the portfolio that AI should take up will depend on the strength of the arguments for focusing on AI. And most of those arguments still just aren’t very fleshed out yet.
Internal criticism of a disproportionate focus on AI-related issues in EA isn’t along these lines. The most common criticism is that in some of the most public-facing EA resources and events, AI-related issues receive attention out of proportion to the share of the EA community that considers AI-related causes the top priority. For example, in the past people have pointed out how organizational representation, the proportion of events and talks, and the overall theme of some EAG events are focused much more on AI than much of the movement prefers. Another example is how the original version of the EA Handbook 2.0 was perceived as disproportionately about AI, with the topic receiving three chapters while every other cause area received a single chapter, in spite of a cause area like global poverty alleviation historically being as much of a priority as AI, or more, among EA community members.
The Centre for Effective Altruism has addressed these problems. For EAG, the CEA has expanded the breadth and diversity of its events. Some EAG events in a given year have a theme like the ‘long-term future’ or ‘emerging technology,’ in part to serve the important function of a conference for those working on AI-related issues under the umbrella of EA, while other events focus on different causes so they aren’t deprived of the same opportunity. Individual controversies get resolved, but the pattern stems from how organizations at the heart of EA like the CEA are primarily responsible for publicly representing EA while themselves prioritizing causes related to AI and the long-term future. From the CEA’s perspective, it doesn’t make sense to put too much effort into producing resources not aligned with their own priorities. At the same time, it doesn’t make sense for everyone involved in EA, organizations and individuals alike, to continually divert a portion of their efforts into the common pool of participation in EA if they aren’t being represented.
While in his talk Ben points out how, even within EA, AI-related causes receive funding disproportionately low relative to the (perceived) importance of the relevant problems, this doesn’t assuage concerns about AI-related causes receiving disproportionate prioritization in other ways. While at present the proportion of resources other causes receive may not appear at risk, the proportion of attention different causes receive today influences what other resources they’ll receive in the future. If public-facing community resources or events like EAG disproportionately feature AI as a priority, then in driving growth they could disproportionately attract newcomers interested in AI, and wrongly repel others, by giving a false impression of how much the community prioritizes AI relative to everything else. In time, this could shape the EA community into one focused on AI-related issues more than many of the people who built and sustain the community ever expected.
The Need to Balance Different Interests
As important as AI alignment is, the professional fields focused on it need a home. Since AI safety and related fields have developed in recent years largely because of contributions from EA and similar communities, it’s natural they’d find a home in EA. And because so much of the work in AI safety and related fields is concentrated in EA, it’s hard for them to find representation and support in other kinds of institutions, like universities. While more universities are instituting research centres dedicated to or focused on causes like AI governance, these fields aren’t yet big enough in academia to have professional academic associations that would do the work of providing resources and organizing conferences on their behalf. Current conferences solely focused on AI safety or beneficial AGI are organized by NPOs focused on these issues, with an existing relationship to the EA community, like the Future of Life Institute. Aside from these other organizations, AI safety and related causes have their public representation and organization most concentrated within EA.
This is in contrast to other cause areas in EA, such as global poverty alleviation and farm animal welfare, which stem from movements or trends in broader society long predating the EA movement. Even if we exclude all the efforts focused on these causes outside the EA community that we would not involve ourselves in, there is a long history of large social networks of organizations and groups dedicated to solving these problems. The same isn’t true for AI-related causes in EA. At the same time, a lot of community members prioritizing other causes come to EA for the focus on effectiveness, evidence-based reasoning, and cause prioritization the community uniquely offers. If they felt they weren’t welcome in EA, it’s not as though their projects would obviously find support in another community or movement. Many organizations focused on many different causes have been founded with substantial support or inspiration from EA as a movement. Many more organizations founded wholly independently of EA have accomplished things they wouldn’t have been able to without support from EA, such as grants from the Open Philanthropy Project. It’s hard to say all these organizations would have achieved as much good as they have were it not for the EA community.
So while AI alignment doesn’t have much of a home outside of EA, a lot of EA efforts in cause areas that receive more attention in broader society would nonetheless also have a hard time finding a home outside the EA community. Another consideration is that perhaps EA should be regarded as a whole greater than the sum of its constituent causes, and as a movement should still be focused on encouraging effectiveness in all kinds of efforts to do good. For a lot of community members, attendance at events like EAG is their primary exposure to the EA community if they’re not as connected online or through a local EA group.
Attendance at a single EAG event could give someone a false impression of which currents of thought are most prominent in the EA community. One could easily walk away thinking job opportunities, giving opportunities, and EA’s potential are concentrated far more in a single cause like AI safety than they actually are. The reality is that many professional fields and communities need central conferences and organizations, and EA naturally plays host for those priorities it holds that receive less support outside the movement. I know EA community members who, as part of their work, attend conferences and network with communities beyond EA that share their priorities, such as other non-profit communities or animal welfare and advocacy movements. There are far fewer conferences for AI safety and related causes totally independent of EA.
How Different Cause Areas Are Bootstrapping Themselves
Of course, as a movement, EA is free to be dynamic and decide for ourselves how to organize. In the past, when disputes arose over representation and the role conference-style events like EAG play, I suggested EA as a community allow greater room for specialization and diversification within the movement. This was a few years ago, and the reaction I got at the time was a prediction that this would cause more, not less, friction and fracture in the EA community. At the time that seemed a realistic possibility, and I didn’t have a response, so I dropped the issue.
Since then, it appears much of the EA community has organized itself in ways that highlight specialization within a particular cause or kind of activity. The CEA has begun hosting retreats and conference-style events for community members professionally specializing in local community-building or non-profit operations management. While they’re not hosted under the EA brand, in the last couple of years there have been more conferences supported by organizations and causes that find their home in EA, such as effective animal advocacy, wild animal welfare, and AI safety. These different events don’t appear to have caused any problems within the EA community, or led individuals to identify exclusively with a single cause rather than additionally as part of the EA movement. That remains a possibility. Yet so far it appears organizations aligned with EA, across a variety of cause priorities, are able to organize their corresponding communities within EA without negative consequences for the movement overall.
While it makes sense to be cautious, as a movement EA can try self-organizing in whatever ways we think most impactful, including what kinds of events we plan and what resources and materials we provide. The CEA is probably one of the few EA organizations with the capacity to oversee multiple conference-style events per year, and thus to oversee a single strategy of more specialized conference-style events. Yet on the EA Forum (or elsewhere online) we can discuss and suggest possibilities for how the EA community can more effectively self-organize. The CEA or another EA organization could then implement or incorporate new ideas, suggested elsewhere, for increasing access to quality information and collaborations within EA.
Something else often missed in conversations about the representation of different cause areas in EA is that such representation is easier to find if one knows where to look. Someone whose first exposure to EA is attending an EAG event could come away thinking EA is mostly focused on only the causes given the spotlight at that particular EAG. Yet looking at stats for which cause areas are the highest priority among EA community members, or what proportion of donations different causes receive, there are significant numbers of people focused on each cause for lone individuals to network with. There are lots of different Facebook groups for EA community members focused on a single cause area, topic, or specialty, each with dozens if not a couple hundred members. Even for causes in which interest has grown more slowly in EA, like wild animal welfare or mental health interventions, there are several projects and organizations that have been launched with support or inspiration from the EA community.
A lot of this info isn’t made publicly or easily accessible in the events, materials, and resources provided by the EA community. This isn’t because of negligence. Much of the EA community itself may be unaware of how well-organized a diverse array of clusters in the movement are. That so much value for such different causes in EA can be created by community members coordinating with each other, without crowding the community too much or making people compete for scarce resources, means there is more opportunity to boost different causes than EA realizes.
Where to Go from Here
For many organizations, the audience the EA movement as a whole has access to is much bigger than the one they could find on their own. So how the movement grows, and how different cause areas within it are represented, is crucial to how well anyone in EA can pursue their mission or goals. Yet I think if more community members were aware of how much potential there is for many causes to flourish in EA, based on the continued and growing success of efforts favouring these different causes in the present, it would quell a lot of anxiety between different causes. There is lots of opportunity for all kinds of efforts in EA to grow as the movement itself grows.
For every cause area in EA to fulfill its potential will still take lots of effort. It will also take careful consideration of different options, and creative problem-solving. For example, I’m not actually confident that EA organizations organizing separate events for each cause area in EA is a good idea. As an ecosystem, EA benefits overall when all community members can learn about and participate in discussions about all kinds of projects and interventions. Another possibility is that the EA community or its organizations could find a way to officially participate in, or send a delegation to, conferences put together by other communities or movements that share goals with EA.
This example is just a suggestion. The point is that EA as a movement, and clusters within it focused on any particular cause, have the potential to self-organize in any way they can think of and implement. I sense there is a feeling that much of the EA community is waiting for permission to suggest new ways the movement can coordinate or mobilize resources, or is expecting some authoritative body to take care of it. The reality is the sky is the limit, and in trying to do the most good, EA doesn’t have to limit itself to conventional modes of organization just because that’s what other communities do.
Thanks for taking the time & care to write this up.
Could you expand a little more on what considerations you think are driving this?
I’m happy to write this up. One thing I think is driving these considerations is a mismatch of priorities, leading people not to communicate about the right things to get on the same page. For example, central EA orgs like 80k and the CEA, with their priority on x-risk reduction, may pay the most attention to helping x-risk orgs find hires. This comes out in what they do. There is nothing wrong with this, because I don’t even necessarily think events like EAG have to trade off between focusing on different causes. It’s just that there is more to EA than x-risk, in terms of both supply of and demand for labour. If things like the EA FB groups directory, which includes different groups for different professional fields within EA for people to network within, are not something people working at EA organizations or many community members at large are aware of, nobody will bring them up. So it can create a mutual impression that there is less opportunity in EA for different kinds of people to work on different kinds of things than there actually is. A lot of this is the self-fulfilling prophecy of confidence. Believing there is room to create new opportunities in EA is the very thing that drives people to create those opportunities. If nobody is pointing out how possible these opportunities are, nobody will believe they’re possible.
Admittedly, since I know more about these resources than most, making them more accessible to the community is something I’d like to see. The Local EA Network (LEAN), a project of Rethink Charity (RC), has been revamping the EA Hub this year, an online portal that could make accessing these things much easier for all EAs. I don’t know if the EA Hub is back up and running for anyone to access, or when that will be. This post itself was more my preliminary thoughts on how people could better reframe disagreements within EA.
For what it’s worth, I shared the EA Facebook group directory in the last issue of the EA Newsletter, and I plan to share more resources like that as they arise (since the Newsletter is meant to be useful to a wide audience). Feel free to reach out if there’s something you think should be included in a future issue, though I can’t promise that we’ll have room for any particular link.
Yeah, that’s great. I think the next step would be to find a way to translate and integrate this info into other forms, like ensuring the info gets out at EAG, or that university EA groups are made aware of it, but that’s a more complex process.
Thanks, Evan.
I think I follow you here? But the syntax could be cleaned up.
Right, in organizing events like EAG, the CEA may optimize for matching up labour supply and demand for x-risk. They may not have the capacity or know-how to do this for every cause area at every event. This could create the impression that there are only jobs at x-risk orgs, or that only people with the respective credentials are welcome in EA. So the appearance that EA only focuses on one cause or another is due to an artificial rather than a real problem. I think people are likely to blame or point fingers, when that misunderstands the nature of the problem, which requires a different kind of solution.
Apologies if this is a silly question, but could you give examples of specific, concrete problems that you think this analysis is relevant to?
Recently, there has been a lot of talk about how the talent pipeline in EA is inefficiently managed. One part of this is that central EA organizations have focused on developing the talent pipeline for a narrow selection of metacharities and NPOs focused on AI alignment/x-risk reduction. Developing infrastructure for the ecosystems of other cause areas in EA, to optimize their talent pipelines, could ease the overall problem of talent allocation in EA.
Cause areas within EA could develop their own materials and handbooks to circulate among supporters, and organize events or conferences that allow for better, more specialized networking than more cause-neutral events like EAG can offer.