7 learnings from 3 years running a corporate EA Group
Thanks to my co-organizers Will Hastings, John Yan, Denisse Sevastian, and Farhan Azam for their input and feedback on this post.
3 years ago, I founded the Effective Altruism @ Meta corporate interest group. We’ve had middling success in drumming up interest for EA at the company, and we thought it might be useful to share some of our learnings for others doing the same thing. I wouldn’t assume that these learnings necessarily generalize to other corporate settings, but I’m eager to hear from others about what does and doesn’t resonate.
1. Finding committed organizers is difficult in a corporate setting
In a high-intensity environment like Meta, people don’t have much time to commit to activities outside of their core responsibilities. We cycled through 8-10 organizers before we finally found a more committed group of 3-4. Most other organizers flaked and eventually stopped attending events altogether.
Beyond being busy with their day jobs, there are a few other reasons I think people consistently flaked. First, unlike in a city or university community, there is no additional personal incentive to volunteer your time: individuals are not in need of financial compensation, and holding volunteer positions does not meaningfully further their careers. Second, since organizers are not colocated and span multiple time zones, it can be hard to find consistent times for organizers to interact.
Ultimately, the organizers who stuck around seemed to be motivated by two things. First, most were independently excited about effective altruism and inherently motivated by the altruistic goals of the group. Second, they were more integrated with the social elements of our group: hosting events and attending meetings was their primary way to connect with other EAs.
2. Community & discussion oriented activities are probably high ROI
Fairly early on in our work at Meta, we started a bi-weekly discussion group. We would host a poll for discussion topics a few days before, then hold an open meeting to discuss the top-voted item for 45 minutes. Actual attendance for these events was low (rarely more than 10 attendees per session), but through regular attendees we eventually found consistent organizers and several individuals who took more serious EA actions. Many of these individuals took donation pledges, several helped us organize events and raise money, some attended EAGs, and some considered or are making career pivots. It’s obviously hard to assess counterfactual impact, but given that for most of these individuals this was their primary interaction with other EAs and EA ideas, it seems likely that their further EA actions were attributable to us.
The actual investment in setting up this regular social community where we discussed EA ideas was negligible. We crowd-sourced discussion topics or pulled from personal reading and existing databases, and we advertised the meeting once every two weeks. By that measure, this was possibly the highest-ROI activity we undertook.
3. Speaker events are probably low ROI
Most of our tactical efforts were online events aimed at exposing Meta employees to EA ideas or effective giving principles. Our very first event had Peter Singer come speak about EA principles, and afterwards we hosted speakers from ACE, GiveDirectly, Google Brain, Giving What We Can, One for the World, and a slew of other EA-aligned organizations. Events took a meaningful amount of work to set up, as they involved internal approvals, event logistics, and internal advertising to ensure attendance.
Despite the medium-high lift to run an event, we became increasingly skeptical that the events were high impact. Actual attendance numbers were low: smaller events would sometimes draw fewer than 5 attendees, including EA@Meta organizers, and even our biggest-name speakers like Peter Singer would cap out at ~40 live attendees. All events were recorded, and we’d get a meaningful number of additional viewers after the fact, but metrics suggested that most of these later viewers would not watch for long.
Most importantly, many of the new attendees at these events would not engage with further content. Even after our largest events, we would not see a meaningful uptick in attendance at our bi-weekly discussion groups or engagement on our internal posts. For events that attempted to garner donations, we saw only small, marginal donations, even when the event was designed to track and explicitly encourage giving.
We had one series of events that plausibly had a larger impact. In December 2021, we hosted a series of events around Give Month, including 8 talks from a variety of EA-aligned speakers. We secured $25K in charity matching dollars from an internal organization, and matched donations from employees brought the total donated to effective charities to >$50K. We relentlessly advertised and cross-posted our events in other internal groups, which led to a ~100% increase in our internal Effective Altruism group size (think Facebook group membership). Although these seem like positive signals, $50K total is a fairly marginal sum at the scale of big tech salaries, and the new group members did not produce a noticeable uptick in engagement or attendance at future events. All of this also came at the cost of a fairly large time and effort investment from our organizers and external speakers.
Overall, we did not receive meaningful signals that our events were having a large impact relative to the investment, whether through increased awareness of EA ideas or large, ongoing increases in donations.
4. Many EA software engineers do not feel like there are good direct work opportunities
Amongst our members, all but the most committed software-engineer EAs felt that there were no good opportunities for them to contribute to direct work. This seems to be due to a mix of factors. Here are some of the reasons folks mentioned:
Financial considerations, especially for those who own a home or have a family, since moving to an EA org normally involves a hefty pay cut
Not being enthusiastic about working on AI (or not having the right skills for it), while feeling that most EA software engineering jobs were in this space
Not being persuaded that their value over a replacement hire in an EA job would outweigh the opportunity cost of dramatically reduced donations
Not feeling “EA” enough to work in an EA org (despite donating large percentages of their salary to EA charities)
Not being able to get (or not feeling sufficiently skilled to get) a job at an EA org
Simply not being aware these opportunities existed
5. Employees tended to be more skeptical of longtermism than the average EA
Extra note about epistemic status: the sample size here is very small, largely based on discussion group attendees (i.e. 20-30 unique individuals). Additionally, there are obvious biases resulting from the viewpoints of prominent group leaders, the chosen discussion topics, etc. With that said, the disparity in viewpoints between my outside-Meta EA community and my internal-Meta EA community was sufficiently notable that I thought it worth sharing.
In my normal EA circles, interacting with folks in a large city group (NYC), I anecdotally find that most individuals (~90%) are sympathetic to at least a weaker version of longtermism. Amongst our regular attendees at EA@Meta events, however, I estimate that our discussion groups were closer to 50/50, and many of those more sympathetic to longtermism were often the same individuals who had independent connections to the EA community outside of Meta.
Based on what I heard in discussion groups, the core skepticism around longtermism related to its epistemic foundations. In contrast to classic GiveWell interventions, group members seemed wary of investing in more speculative existential-risk scenarios where empirical data is limited, and even more skeptical of interventions that lacked rigorous empirical methodology or effective feedback loops.
A few theories for why these concerns may be more prevalent amongst our group attendees:
Meta employees work in a space where empirical testing is highly prioritized; almost every major feature or change launched goes through a round of A/B testing and statistical analysis
In general, employees who have mostly spent time in corporate spaces are more likely to be skeptical of qualitative, academic arguments that haven’t been demonstrated to reflect truth “in the real world”
Most employees were donating regularly and wanted to feel that their contributions to EA were meaningful, whereas the longtermist shift in EA seemed to treat their contributions as (at least comparatively) unimportant, which rubbed them the wrong way
Similarly, feeling that the longtermist shift in the community left less space for them to contribute if they had no obvious path to a career change (see above)
Simply having less exposure to mainstream EA ideas by virtue of not spending time in EA circles outside of Meta
6. Influencing key decision making is hard, even from within the organization
One of our theories of change within EA@Meta was that we could influence corporate decision making. For example, we theorized that we might be able to nudge internal AI teams to think more about safety, or push Facebook’s fundraising tools to consider efficacy.
Our first project in this space was trying to convince our internal Social Impact team to consider charitable impact within the Facebook fundraising tool. When you create a fundraiser within Facebook (as many folks do for their birthday), it recommends a set of charities to pick from. There is a ranking model underlying which charities are recommended, and we were trying to figure out whether we could incorporate charitable efficacy, either through the UX or directly through the ranking model.
We ran into a few problems trying to make such a change. First of all, none of our immediate volunteers actually worked on the product: there was a team that worked full time on the fundraising tool, and we would need their support and (more importantly) their consent to ship any changes. When we interacted more with this team, we quickly learnt that their topline metrics (what their leaders were concerned about, and what individuals were assessed on) were things like charitable dollars moved and total number of donors, framed in the context of “building a community of donors”. Incorporating efficacy too strongly within the product could plausibly harm these metrics.
Although we eventually found sympathetic individual contributors within the Social Impact team, including some who were even familiar with EA ideas, I remain skeptical that we could ship any code that would have a large negative impact on their topline metrics. We have some ideas for how we could tweak the experience to incorporate efficacy without major costs to the team’s core metrics, but these changes are much more on the margin than what we had originally envisioned, as the sketch below illustrates.
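To make this concrete, here is a minimal, purely hypothetical sketch of the kind of marginal change we had in mind: re-ranking recommended charities by a weighted blend of the existing model’s score and an efficacy signal. Every name, score, and weight below is an illustrative assumption, not Meta’s actual system.

```python
# Hypothetical sketch only -- not Meta's actual ranking system.
# Assumes each candidate charity already has (a) a score from the
# existing ranking model and (b) an efficacy rating from some external
# evaluator; both values are invented here for illustration.
from dataclasses import dataclass

@dataclass
class Charity:
    name: str
    model_score: float     # assumed output of the existing ranking model (0-1)
    efficacy_score: float  # assumed evaluator-derived efficacy rating (0-1)

def blended_score(charity: Charity, efficacy_weight: float = 0.2) -> float:
    """Blend the existing model score with an efficacy signal.

    A small efficacy_weight nudges effective charities upward while
    mostly preserving the original ordering (and hence, plausibly, the
    team's topline metrics); a large weight reorders results aggressively.
    """
    return ((1 - efficacy_weight) * charity.model_score
            + efficacy_weight * charity.efficacy_score)

# Toy example: with the default weight, the more effective charity
# edges ahead of the more "engaging" one.
charities = [
    Charity("Popular Charity", model_score=0.90, efficacy_score=0.20),
    Charity("Effective Charity", model_score=0.80, efficacy_score=0.95),
]
print([c.name for c in sorted(charities, key=blended_score, reverse=True)])
```

The knob that matters is efficacy_weight: kept small, the re-ranking stays marginal enough that the team’s topline metrics plausibly survive; turned up, it reorders results aggressively and runs into exactly the resistance described above.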
Ultimately, I worry that other attempts at influencing large organizations will encounter similar roadblocks. Senior leaders are compelled to focus on a specific vision that is not EA-aligned, and they in turn pressure their teams to optimize for the corresponding metrics. Even if there is manpower and sympathy at more junior levels to update systems, these top-down pressures make change very difficult.
7. There are promising areas within corporate organizing that deserve more exploration
There are a few areas I would have been keen to explore further had I not left Meta, and I hope the remaining organizers continue to try them. These are all low-confidence ideas, but I wanted to list them quickly here in case they spark ideas in others:
Build relationships with senior executives: We could potentially dodge some of the difficulties around internal decision making if we had senior allies. At Meta, we know there are some very senior individuals who are sympathetic to EA. Our previous attempts to build these connections were unsuccessful, as most potential executive sponsors told us they were simply too busy, even when our requests were small (“sponsor” an event, help us secure some charitable dollars, etc.). A more concerted effort here could potentially bear fruit.
Focus on asynchronous engagement: One comparative advantage that workplace groups have over university or city groups is access to workplace communication channels. Re-posting a newsletter, external opportunities, or other interesting EA content is low effort but fairly high return, in that it can reach high-potential but unengaged future EAs. Our internal EA@Meta group had ~400 members, and posts would often be seen by >100 individuals. We may have under-invested here and didn’t experiment enough with content, despite the potentially high ROI.
Focus on social activities: Given our varying locations (our organizers were spread across SF, Seattle, NYC, Zurich, London, etc.), we always found it difficult to organize social activities, especially given high burnout on online activities post-quarantine. However, I think a lot of our successes can be attributed to our existing “social” activities, like our bi-weekly discussion group, so trying to identify which social events individuals will actually want to attend seems promising. Simply funneling individuals into local EA city events could be one strategy here.
Build an external community for corporate organizers: We definitely felt pretty isolated as corporate organizers. Although we had interacted with some folks from other tech companies, these meetings were often sporadic and unstructured. There are already folks working on this, and I could see these sorts of activities bolstering organizer retention in addition to sharing useful learnings.
Support for corporate organizers: I know both HIP and CEA are thinking about how to support corporate organizers, although it’s unclear to me what is most useful here. Conventional support mechanisms (money, formal positions) are not interesting to corporate organizers, and many of the existing resources for university or city groups work just fine for corporate groups. Some potentially helpful areas: facilitating the external community (see above), helping think about measurement (measuring impact is super tough, and I assume others have useful expertise here), building programs for professionals that confer some credential (e.g., an AI Safety program that folks might see as furthering their careers), or simply legitimizing the groups somehow (which may help organizer retention).
These tips felt relevant to my chats with teammates at Electronic Arts who participate in social impact.
A couple of highlights resonated with me:
Just wanna add more detail to this one.
I’d say those who took the donation pledges, attended EAGs, and considered career pivots were already part of the core organizers. I’d classify them (us) as already predisposed to EA, which slightly reduces the counterfactual impact versus laypeople being persuaded this strongly. Still mostly counterfactual, though, via the community and shared purpose the group established at Meta.
The donations, however, were fully counterfactual imo. None of the core organizers donated through the drive, since they were already going to donate anyway and didn’t want to distort the results when measuring impact.
To push back a little: I don’t think this is true for all success stories. Although it holds for some core organizers, many later organizers were participants in our discussion group/community events first, and only then became organizers (which in turn resulted in the outcomes described).
You’re definitely correct, though, that some folks are definitively NOT counterfactual (e.g. Will, you, me) and were already taking EA actions without the group’s influence.
But there’s potentially greater impact that’s harder to measure:
None of the people donating were part of the core organizers; they were likely people recently introduced to EA and influenced enough by the ideas to donate. It’s possible this leads to future repeated donations (one person has mentioned this intention so far) and further engagement with EA.
The increased group membership could pay off in the future, though there are no noticeable improvements now. Having double the eyes on all the future EA content we post might pay off in terms of increased engagement or donations. Possibly it already has: the rate of newly engaged faces doesn’t seem to have changed, but maybe the return to in-person events has made our exclusively virtual events less appealing, and without Give Month the new-person rate would’ve gone down.
Definitely agreed!
To clarify, I’m not claiming the events were low impact; they may very well have had the harder-to-measure forms of impact you’re describing here!
Mostly I’m trying to draw a comparison to some of our activities, like our discussion group, which were lower effort to set up and had clear, measurable positive outcomes.