Learning from non-EAs who seek to do good

Is EA a question, or a community based around ideology?

After a year of close interaction with Effective Altruism – and recognizing that the movement is made up of many people with different views – I’m still confused as to whether EA aims to be a question about doing good effectively, or a community based around ideology.

In my experience, it’s largely been the latter, but many EAs have expressed – either explicitly or implicitly – that they’d like it to be the former. I see this in the frequent citations of “EA is a question (not an ideology)” and the idea of the scout mindset; and most recently, in a lot of the top comments in the post on suggestions for community changes.

As an EA-adjacent individual, I think the single most important thing the EA community could do to become more of a question, rather than an ideology, is to take concrete steps to interact more with, learn from, and collaborate with people outside of EA who seek to do good, without necessarily aiming to bring them into the community.

I was a Fellow with Vox’s Future Perfect section last year, and moved to the Bay Area in part to learn more about EA. I want to thank the EA community for letting me spend time in your spaces and learn from your ideas; my view of the world has definitely broadened over the past year, and I hope to continue to be engaged with the community.

But EA has never been, and I don’t see it ever becoming, my primary community. The EA community and I have differences in interests, culture, and communication styles, and that’s okay on both ends. As this comment says, the core EA community is not a good fit for everyone!

A bit more about me. After college, I worked with IDinsight, a global development data analysis and advisory firm that has collaborated with GiveWell. Then I wrote for Future Perfect, focusing on global development, agriculture, and climate. I care a lot about lowercase effective altruism – trying to make the world better in an effective way – and evidence-based decision making. Some specific ways in which I differ from the average highly-engaged EA (although my priors, or theirs, could change) are that I’m more sympathetic to non-utilitarian ethical theories (and religion), more sympathetic to person-affecting views, and more skeptical that we can predict how our actions now will impact the long-term future.

My personal experience with the EA community has largely been that it’s a community based around an ideology, rather than a question. I probably disagree with both some critics and some EAs in that I don’t think being a community based around ideology is necessarily bad. I come from a religious background, and while I have a complicated relationship with religion, I have a lot of close friends and family members who are religious, and I have a lot of respect for ideology-based communities in many ways. It’s helpful for me to know what a community’s ideology is: going into discussions, I know where we’ll likely differ and where we’ll potentially find common ground.

If EA aims to be a community based around ideology, I don’t think much has to change; my only request would be that EA leadership and general discourse more explicitly position the community this way. It’s frustrating and confusing to interact with a community that publicly emphasizes the importance of moral uncertainty, only to have your ideas dismissed when you’re not a total utilitarian or strong longtermist.

That said, a lot of EAs have expressed that they do not want to be a community based around ideology. I appreciated the post about EA disillusionment and agree with some of the recent posts critical of the community’s environment for women, but this post is not about the community itself.

Why engaging with outside people and ideas matters for “EA as a question”

If EA truly aims to be a question about how to “do the most good”, I think the thing the EA community needs to do most is learn from and work with the myriad people who care about doing good, and doing it effectively, but for whom EA will never be a primary community. Lots of the people EAs can learn from are probably less “EA” than I am. Some examples (although of course people with these identities may also identify as EA): people living outside of EA city or country hubs; people based in the Global South; people who are older; people with children; and people with years of career experience completely outside the EA ecosystem.

None of the following means I think that the EA community should cease to exist: it’s a unique place in which I think a lot of people have found great community.

But there’s a difference between “EA the community” and (lowercase) “effective altruism the project”. The main tension I have observed – though this is based on anecdotes and conversations, and I could be mistaken – is that the EA community’s insularity, in which cause prioritization, hiring, social life, funding, and more are all interconnected (as discussed in this post), is hindering lowercase effective altruism: it means that a lot of people who would add valuable ideas, but aren’t engaged with EA the community, aren’t at the professional table, either.

Some of the key groups I’ve thought of that are less involved in the EA community but would likely provide valuable perspective: policymakers at the local and national levels (including policymakers from the Global South), people with years of expertise in the fields EA works in, and the people most affected by EA-backed programs. But there are also ways of doing good with which I’m less familiar – I’m still EA-adjacent enough that there’s much I would be missing, which is one more reason it’s important to have more diverse networks!

Interacting more with people from these backgrounds would bring perspectives on doing good that people who identify as EAs – largely white, male, and young, with a heavy focus on STEM and utilitarian philosophy – are missing. CEA recognizes the importance of diversity and inclusion for attracting talented people and not missing out on important perspectives. Beyond this, some concrete problems driven by homogeneity that have recently been brought up on the forum are a lack of good organizational governance and a limited ability “to design and deliver effective, innovative solutions to the world’s most pressing problems”.

How EA can engage with people outside of the community

Here are some of my concrete suggestions for how different groups within EA can engage with people outside of EA without aiming to bring them into the community. This list is not comprehensive, and I’ve seen many of these suggestions discussed elsewhere (I like this comment from another EA outsider); where I’ve seen specific positive examples, I note them. I’ve grouped the suggestions into three broad and imperfect categories: events, ideas, and professional actions.

Events

  • EA Global accept and actively recruit people, especially from the Global South, who are experts in fields aligned with lowercase effective altruist goals – for example, evidence-based decision-making and ensuring the cost-effectiveness of interventions. I found that EA Global DC made good headway here with people in the US policy space, but this could expand further – which leads to my next point.

    • Most people in the world, now and in the future, don’t live in the US and Europe. I have seen some good efforts at Global South recruitment of students and early-career professionals for the EA community (such as in India and the Philippines), but I would recommend going beyond even that – bringing in people who won’t become highly-engaged EAs, but who could both teach and learn from the community. One group that came up in a discussion I had was IAS officers (members of the Indian Administrative Service) – they and EA could both benefit from discussions on bringing evidence into policymaking.

    • This sort of engagement would almost certainly involve more EA-related discussion in languages other than English, and it’s been exciting to see traction in the community on this recently.

  • EA groups cohost more joint community meetups. I’ve seen this happen with Bay Area YIMBY, and I’d love to see more of these with other communities that have overlapping aims. This might also help fulfill the goal of increasing diversity within the EA community if some attendees want to become highly-involved EAs.

  • EA organizations engage with the ideas and preferences of people impacted by EA programs, as in GiveWell and IDinsight’s collaboration on measuring people’s preferences. Given EA’s (and my own) elite background, this might be harder than engaging, for example, officers in the Indian Administrative Service, but I would love to see efforts to include the majority of the world in decisionmaking about issues that will affect the whole world. It would be great if EA orgs could incorporate these perspectives into both program decisions and cause prioritization.

Ideas

  • I think it’s important that EAs within the core community discuss ideas with, and listen to, people from outside the EA community. EAs in conversation with non-EAs often employ the scout mindset, and I think EAs are in general curious, open to learning, and open to coming to synthesis. But I’ve sometimes found conversations around areas where I disagree with “EA orthodoxy” frustrating; in some cases, my ideas have seemingly been dismissed off the bat, and the conversation has ended in appeals to authority, which is both alienating and bad epistemics.

  • EAs engage with ideas and critiques from outside the EA ecosystem. This could be through interpersonal interactions; through continuing to engage with philosophy, social science, and other ideas outside of EA (especially important for new EAs, for whom there can be an overwhelming amount of EA-specific content); or through inviting non-EAs to speak at EA Global or on EA podcasts. Lowercase effective altruism can only be made stronger by reading and engaging with other ideas, even if (and probably especially when) they challenge EA orthodoxy.

Professional actions

  • EA (orgs, and the core community at large) de-emphasize EA orthodoxy and the search for the singular best way to do good, and instead bring things like evidence-based decisionmaking to all fields. This is maybe a general ideological difference I have with EA that would merit its own post, but I think cause neutrality taken to its extreme (especially given that we all have imperfect information when trying to prioritize causes) can be alienating to people who want to do good effectively, but whose idea of doing good isn’t on the 80,000 Hours list. Some organizations, like Open Phil and Founders Pledge, are great at looking across fields. But general community emphases that make it seem like AI, for example, is central to EA mean that non-AI people might conclude that lowercase effective altruism is not for them either – when it might be!

  • EA orgs fund and collaborate with non-EA orgs that want to improve the world in a cost-effective way. Grantmakers should, where possible, explicitly seek out organizations and cause areas outside the EA ecosystem. I’ve been excited to see Open Phil’s request for new cause area suggestions, and moves like the one into South Asian air quality.

  • EA orgs take concrete steps to hire non-EAs. (I think there was a post on this recently, but unfortunately I can’t find it.) People with decades of experience in fields as diverse as management, policymaking, and scientific research could add a lot to EA organizations without having to be involved in the community at all. One concrete step EA orgs could take is removing EA ideology words from job descriptions, and instead defining for themselves the core EA principles that matter for each role, so that the descriptions resonate beyond just people who identify as EAs.

    • From a personal perspective, I want to say that EA and EA-adjacent orgs have been very open to me working for them, despite (or because of) my being open about my perspectives, and I want to thank them for this. That said, I only started getting recruited once I worked for Future Perfect and started going to EA events, and I think a lot of people better than me are being missed because they don’t know what EA is, EA doesn’t know who they are, or they think EA jobs aren’t for them. I know recruiting outside of one’s networks is difficult, and that essentially every sector has this problem, but there is a lot of potential impact in hiring outside the community.

If EA aims to be a question, I think there’s a way forward in which EA continues to be a unique community, but one that learns from and engages with people outside of EA a lot more – and we can work to do good better, together.

Many thanks to Ishita Batra, Obasi Shaw, and others for their comments on this draft; and many thanks to the many people both within and without the EA community I’ve discussed these ideas with over the course of the last year.