Lessons from Running Stanford EA and SERI
Introduction and summary
Who knew a year of work could turn a 1-organizer EA group into one of the largest EA groups in the world? Especially considering that the person spearheading this growth had little experience running much of anything relevant, and very suboptimal organizational skills (it's me, welp).
I definitely didn’t, when I started running Stanford EA in mid-2019. But I did know it was worth a shot; many universities are absolutely insane opportunities for multiplying your impact—where else can you find such dense clusters of people with the values, drive, talent, time, and career flexibility to dedicate their careers to tackling the world’s most pressing, difficult, large-scale problems?
For a few years, Stanford EA had effectively one real organizer (Jake McKinnon), and our only real programming was weekly discussions (which weren't very well attended) and the occasional one-off talk. This was the case until 2019, when Jake started prioritizing succession, spending lots of time talking to a few intrigued-but-not-yet-highly-involved members (like me!) about the potential impact we could have doing community building, both for the group and for the world more broadly, and encouraging us to get more involved. Within a year, Stanford EA grew to be one of the largest groups in EA, and in the second year since, I've been super proud of what our team has accomplished:
Getting SERI (the Stanford Existential Risks Initiative) off the ground (which wouldn’t have been possible without our faculty directors and Open Phil’s support), which has inspired other x-risk initiatives at Cambridge and (coming this year) at Harvard/MIT.
Running all of CEA’s Virtual Programs for their first global iterations, introducing over 500 people to key concepts in EA
Getting ~10 people to go from little knowledge of EA to being super dedicated to pursuing EA-guided careers, and boosting the networks, motivation, and knowledge of 5+ more who were already dedicated (At Stanford, and hopefully much more outside of Stanford since we ran a lot of global programming)
Running a global academic conference, together with other SERI organizers.
Running a large majority of all x-risk/longtermist internships in the world this year, together with other SERI organizers (though this is in part due to FHI being unable to run their internship this summer)
Founding the Stanford Alt. Protein Project, which recently ran a well-received, nearly 100-person class on alternative proteins, and has also set up connections/grants between three Stanford professors and the Good Food Institute to conduct alternative protein research.
Helping several other EA groups get off the ground, and running intro to EA talks and fellowships with them
I say this, not to brag, but because I think it shows several important things:
There’s an incredible amount of low-hanging fruit in this area. The payoffs to doing good community-building work are huge.
You (yes, you!) can do similar things. We’re not that special—we’re mostly an unorganized team of students who care a lot.
We still have so much to learn, but I think we got some things right. What’s the sEAcret sauce? I try to distill it in this post, as a mix of mindsets, high-level goals, and tactics.
Here’s the short version/summary:
EA groups have lots of room for growth and improvement, as evidenced by the rapid growth of Stanford EA (despite it still being very suboptimally run, with lots of room for improvement!)
Useful mindsets for community building:
Take charge and responsibility: Be agentic, ambitious, conscientious, and check in with others—community building work is often public-facing, and reputational risks can be costly—first impressions are hard to shake!
Try to really figure things out: Think critically about how we can really do the most good, and then actually do it. Instill this curiosity and drive in your group members.
Take ideas (and their implications) seriously: EA is filled with crucial considerations that drastically change how important certain things are to focus on. Lots of important ideas probably seem very weird and unintuitive. Perhaps the most important part of taking ideas seriously is taking actions in accordance with the ideas you believe.
Becoming a strong friend group and team:
Because of our (Stanford EA core’s) shared values and goals, as well as lots of time we’ve spent together, there is a strong sense of us being on the same team, trying to help each other out as much as we can, and wanting to achieve amazing things together.
It’s also much more fun to do things for your group and with other organizers when you really enjoy their company!
Suggestions for how to make this happen for your group are in the lessons section—though I’d specifically recommend group chats for highly involved members, daily default (optional) meals with the group, and living together
High-level goals for your group:
A helpful proxy for doing as much counterfactual good as possible for EA groups is optimizing the number of members who are highly engaged with the EA community and pursuing career plans based on trying to do the most good possible.
Get others excited about community building. You can communicate to them that university is a unique place to get talented people to make career changes, that top-priority causes are extremely neglected, and that the multiplier effect on your impact can be huge.
Leverage the community (existing resources, networking, asking for help, etc) and improve it (e.g. by sharing your resources and experiences running your group)!
Create a culture of: Curiosity (how do we really do the most good?), ambition and entrepreneurship (think of amazing things and actually do them), and an engineering mindset (constantly think and act on improvements for the group).
Best Practices and Lessons:
Students are much more receptive to career-focused messaging than donation-focused messaging.
Recurring programming is much better than one-off programming for retention.
There is a lot of available funding in the meta-EA and longtermist space.
There is a lot of low hanging meta-EA fruit and you can do a lot without much prior experience.
Learn, reflect, write, and talk with people about EA (a lot)
Focus on retention and deep engagement over shallow engagement
Prioritize 1:1s and personalized programming
Getting involved should be easy and clear
Make group involvement (highly) valuable to group members
Tight-knit community and support network and team
Prioritize leadership succession
Be proactive, entrepreneurial, and experimentative
Always think about how to scale things up and be ambitious
Try to have a strong reputation, especially with your target audience
Network and communicate with others a lot
Our fairly up-to-date list of programming can be found in this post I wrote about Stanford EA’s programming over the pandemic.
Thank you to everyone who has contributed to making Stanford EA and SERI what they are today, the Stanford EA core team—and in particular Jack Ryan, Michael Byun, and Sydney von Arx, for all the incredible work you’ve done for Stanford EA and SERI over the past couple of years, Jake McKinnon for getting me motivated to prioritize community building, 80K for getting me involved in EA in the first place (and offering a lot of the best content our group uses), and everyone who commented on drafts of this post (Danny Wolfe, Michael Byun, Katherine Worden, Emma Abele, Jake McKinnon, Jessica McCurdy, and Bella Forristal).
I think EA groups could be much better than they are currently.
As I mentioned earlier, Stanford EA had effectively one real organizer until 2019. Just within the past two years Stanford EA has grown to be one of the largest EA groups in the world, and we’ve started and run most of the programming of the Stanford Existential Risks Initiative. I think that our growth is replicable, and that you do not need to be a superstar public speaker or highly experienced organizer to run a successful group (I sure wasn’t). That being said, there were definitely lots of factors in my favour. I was on a Community Building Grant so I was getting paid to spend time figuring out how to run Stanford EA well (In case it’s helpful, I’d estimate that I spent around 40 hours a week on EA). I also had the benefit of being in the Bay Area and had many amazing students and community members supporting me, while I also really enjoy learning and talking to people about EA and related ideas.
I spent those approximately 40 hours per week on a combination of organizing, learning, brainstorming, and socializing, mostly through 1:1s and small group discussions, while completing my Masters degree. The 80/20 version of my work (e.g. running a high-quality intro fellowship and 1:1s with promising members) could've been done in 10-20 hours per week, though I found that the additional time I spent on EA community building helped me to develop many of the ideas and programming that I'm sharing in this document. This extra time further helped me to get many other Stanford organizers similarly excited about EA and community building.
Over the last two years running Stanford EA and the Stanford Existential Risks Initiative (SERI), I’ve learned a fair amount about running groups well. I’d like to use this document to share what has worked well for us along with best practices for EA groups in areas that require improvement across the board. I’ve written about our growth and programming during the pandemic in a previous post, so this post focuses more on the mindset with which I approach community building, high-level strategy, and concrete lessons and advice for organizers.
If I don’t do something, who will?
What has been most important for me is the mindset with which I approach community building (one that seems to be shared by many other highly engaged community builders). I see this mindset as feeling a strong sense of urgency about the state of the world and how much better it could be. Personally, I find my deepest resolve when I think about the possibility of all life on Earth ending, having all of our hopes and dreams for the future extinguished, and how much suffering already occurs—particularly in factory farms and low-income countries. I have found that engaging with texts such as Strangers Drowning and histories of slavery has been particularly powerful in cultivating and strengthening this mindset and my personal resolve. There is also an Anne Frank quote that really resonates with me:
“How wonderful it is that nobody need wait a single moment before starting to improve the world.”
I'd like to add a caveat: unilateralist actions can carry significant downside risks, and extensive communication is extremely important for community building, especially before doing anything public-facing. That said, the basic idea that there is a lot to be done (with sufficient collaboration and feedback) is both true and important.
For many people, thinking about how much better life could and will be if we improve our world is similarly motivating. Independent of the specific source of motivation, the mindset of "If I don't fix or do this, no one will" feels, on balance, true (e.g. "the fellowship won't happen unless I run it", or "these students who seemed interested and are about to graduate probably won't get more involved unless I set up 1:1s with them and encourage them to apply to jobs on the 80K job board"). This mindset also inspires me to aim high, be strategic, try really hard, and convince others to join me in this difficult but highly rewarding quest to improve our world as much as possible.
Encouraging this mindset in fellow organizers has led to our group members being proactive, experimenting with new programs each quarter, making large-scale changes to continually improve our programming, and thinking about scaling up to provide value to the global community.
Figuring Things Out: How can we really do the most good? What does the most good even mean?
Another mindset that I think is crucial for organizers to have—and one that I'm very glad is shared by core members at Stanford EA—is one of really trying to figure out how to do the most good, rather than finding a cause we like that has been given the EA seal of approval and feeling content sticking to it. At its best, applying this mindset has looked like a handful of us reading and thinking extensively about our values and the world's problems, and frequently discussing these topics to learn from each other, especially from our friendly and constructive disagreements (e.g. "You and I both want to do as much good as we can, so if we have pretty different ideas about how to do so, we should figure out why that is, and perhaps one or both of us should change our mind.").
Engaging with other people often—whether through writing, talking, or other means—also prepares you to talk about EA much better with newer members, demonstrates how much you've thought about these ideas and how much you care, and helps you develop better views on cause prioritization (how to actually improve the world), share and signal knowledge to other community members, and more.
This mindset is related to the general practice of backchaining/theory of change. Critical thinking from first principles, and more generally thinking hard about how to actually realize the changes you want to see in the world, are crucial for EAs, and for group organizers in particular. For example, you'll need to figure out questions like "what gets people really invested in EA/the group/pursuing the highest-impact career they can?", where there isn't much good existing knowledge to go off of. It's broadly useful to be able to find useful actions and do them yourself, rather than mimic similar actions without a deep understanding of why those actions would be useful. Without that deep understanding, you can't see inefficiencies, ideate how things could be better, or identify crucial considerations that might suggest certain actions are much better (or worse) than they initially seem.
Take Ideas (and their Implications) Seriously
Crucial considerations about population ethics, moral philosophy, moral circle expansion, and other topics (e.g. longtermism, the meat-eater problem, the mere addition paradox, long-term indirect effects) might unintuitively flip the sign of certain actions and causes, or at the very least make them less clearly highly effective (and conversely, things that seem unintuitive or weird might be extremely important—like the wellbeing of animals and future people, and the likelihood of extinction this century). Other considerations (like the funding situation in the meta-EA and longtermist space) might make certain careers (e.g. earning-to-give) much less (or more) impactful.
One important part of taking ideas seriously is acting on their implications. If A implies action B and you believe A, you should really consider doing B. Examples of Stanford EA core members taking EA ideas seriously and applying them to our lives that I’d want to see more of:
Making our career plans primarily based on EA considerations/what we think would most improve the long-term future (I think this applies to all of our core members)
Relatedly, many of our members spend a lot of time thinking about cause prioritization, population ethics, and comparisons between cause areas and career options, and engage deeply with ideas that seem "pretty weird" (e.g. around how transformative AI will be, (wild) animal suffering, how counterfactual meta-EA work is, and more).
Taking the implications of thought experiments/ideas about morality seriously: For example, many of us think that failing to do a very good thing (like choosing a high-impact career, donating), even if we don’t cause harm—is very bad. As a result, many core members spend a significant fraction of our time organizing for Stanford EA and SERI, learning about EA and relevant ideas, and talking to the broader community, on top of using these principles to make career plans.
Most core members are vegan (though I think diet is much less important to focus on than career when it comes to impact, it is a concrete example). More importantly (for the animals), Michael and I co-founded the Stanford Alt. Protein Project (whose impact is mentioned above).
Becoming a Strong Friend Group and Team
Becoming a strong friend group and community has also been instrumental for Stanford EA. Because of our (Stanford EA core’s) shared values and goals, there is also a strong sense of us being on the same team, trying to help each other out as much as we can, and wanting to achieve amazing things together. This is a pretty unique feeling that is hard to find elsewhere, but when it’s there it can be really powerful. For several of us, many of our best friends are fellow Stanford EA members, and knowing that our goals and values are shared strengthens our friendship and desire to help each other. These strong bonds have led to much of our success (e.g. wanting to do more for the group since it involves productively spending time with friends, advancing each others’ goals, having fun, and (hopefully) having an impact). I think EA groups are uniquely situated to provide this level of support and friendship. When cultivated properly, you’d be surprised what a small group of dedicated, ambitious students can accomplish together (examples below, and in this post).
Clear Vision and Strategy for your Group
The specifics of what EA groups should do are unclear, and differ from group to group (based on factors like the demographic you’re serving, the geopolitical context of your country/city/university/etc), but there are some common themes. My strategy for Stanford EA is as follows:
Primary Goal: Do as much good as possible and generate as much counterfactual impact as possible, based on the principles and worldview laid out here. What does this look like?
Finding, developing, and motivating highly thoughtful, impartially altruistic, talented, ambitious community members who are figuring out what “doing the most good” means, and actually doing it.
One helpful proxy for the primary goal is optimizing the number of members who are pursuing career plans based on trying to do the most good possible, and are highly engaged with the EA community (group organizing makes this engagement much more likely, but is not necessarily required). This will likely involve:
Running programming that gets people to substantively engage with EA over many hours (I highly recommend fellowships for this purpose—learn more about fellowships here)
Making sure everyone who might be interested hears about the group and how to get involved (great tips on advertising can be found here—thanks Marie Buhl and Bella Forristal!)
Get others excited about EA, your EA group, and community building! The main arguments that I and others find compelling:
Multiplier effect—your career only has 80,000 hours, but it will likely take far less than that to counterfactually get someone else to also pivot to a high-impact career.
University is perhaps the best place to find a large number of extremely altruistic, hard-working, talented people with a lot of flexibility and free time, who are actively trying to figure out what to do with the rest of their lives (and careers in particular)
There is lots of low hanging fruit in the community building space (hence Stanford’s ability to become a top group within a year)
Coordinate with and leverage the broader community:
Share your experiences and resources—When you find things that work well (or don’t), share them with the broader community (like this document!)
On the flipside, ask for help, and don’t reinvent the wheel when unnecessary (though sometimes innovation can be very important—but it’s often better and easier to start off with something concrete and iterate rather than building up from scratch). As a newer organizer, you’ll probably be doing more of this (asking for help) at the beginning, and that’s OK! We are happy to help!
Stay up to date on thinking in the community—knowing about current thinking about cause prioritization and the most promising solutions to issues, the funding landscape, talent bottlenecks, and opportunities for students to contribute helps inform how to advise students in your group about career paths, skills they should build, what concrete things they can do in the near term, and who to reach out to.
Network, network, network (expanded below)
Create a culture of being ambitious, trying things, and striving for excellence—I think EA groups are currently nowhere near their ultimate potential (this is definitely the case for Stanford). There's a lot more to explore, and if something that seems good hasn't been done, there's a decent chance that no group has thought to do it, or that no group has had the capacity. Regarding the culture of entrepreneurship and ambition, things Stanford EA board members have done (within their first year of getting involved!) include:
Running global intro fellowships and Precipice reading groups attended by hundreds of new community members (Michael)
Running a US policy speaker series, and creating syllabi for and running reading groups on AI governance + existential risk and social sciences
Overhauling our onboarding process and running much of our 70-person summer research internship on existential risks and longtermism for SERI (Oliver, though the summer program is a multi-organizer operation)
Running an online conference on existential risks with over 1,000 attendees from 50+ countries (many)
Staying Connected to the Community
Being highly connected with the broader EA community has been incredibly valuable. I did a lot of outreach and networking on behalf of Stanford EA (details below in the networking bullet point) when I started my community building grant. This has been useful for many reasons. First, it meant we rarely had to start from scratch: I based our first intro fellowship on Harvard EA's syllabus. Our first career workshop worksheet was nearly a copy of London EA's. I used Yale's handover docs to determine how to run our retreats. The list goes on and on. Community members are really nice and want to help you out! :)
Secondly, being able to introduce promising, highly engaged members to the global community and EA professionals in particular has had many benefits for group members including:
Helping members realize that they can pursue a full-time career doing EA/high-impact work
Being supported by a global network of professionals, organizations, and funders
Fruitful collaborations (e.g. developing the US Policy speaker series), workshops (attending CFAR and AIRCS workshops and SERI mentorship), and other opportunities (e.g. funding to run a high school summer program on EA).
Connections to relevant experts in their fields of interest
Some helpful community resources and people I wanted to highlight: the EA Groups Slack, Catherine Low and community building grantees (like myself!) who you can reach on the EA Groups Slack, the EA Hub Resources Page (for resources others have created on how to start and run groups), FB pages like EA Online Events and Job Postings, the EA Groups Newsletter, and the EA Forum (to read extremely useful posts like this one :P).
Lessons and Best Practices
Now I’m going to share some concrete best practices for running EA groups and lessons I’ve learned from running Stanford EA, along with our current set of programming.
Students are much more receptive to career-focused messaging than donation-focused messaging. For example, when marketing our intro EA fellowship, mentioning career planning resources and mentorship seemed to be most successful at getting students to sign up. This could be for many reasons.
Firstly, many students are (or feel) financially insecure, and are often not working, not earning very much if they are working, or relying on parents, scholarships, etc.
Secondly, many people feel negatively towards donations in general, since donations open up a can of worms: questions around class inequality, how the wealthy have ended up with access to disposable resources, whose right it is to decide how this wealth is (re)distributed, band-aid solutions within the system vs. structural reform, and so on. People do not have similar reservations about careers.
Thirdly, and perhaps most importantly, what to do with the rest of our lives is the primary question most students are trying to answer. Universities give woefully inadequate support and information about careers in general, and even less about high-impact careers. Many students would love nothing more than to find an intellectually, financially, and ethically fulfilling career (though the relative importance of those characteristics differs a lot from person to person (sads)). EA groups are very well equipped to address this inadequacy (thank you 80,000 Hours!). Where else can you find a group of highly altruistic, talented, hardworking students earnestly trying to help each other land their dream jobs tackling the world's most pressing problems? (Not at slimy pre-professional clubs that are actually zero-sum and snaky 🐍).
Recurring programming is much better than one-off programming for retention: By recurring programming I mean programming with expected attendance and engagement (e.g. doing readings/exercises before meeting), ideally on a weekly or more frequent basis—so prioritizing programming like fellowships, reading groups, career intensives, recurring socials, and research programs over one-off talks, socials and workshops (without followup). We didn’t get any new committed members for a few years when we just had weekly discussions and a few talks. After starting our fellowship, we’re getting a couple of new highly engaged members every quarter.
There is a lot of funding available in the meta-EA, community building, and longtermist space. Take advantage of this (as long as funding is going towards making the world a better place). I am still figuring out how to do this well. If food drastically increases attendance and engagement, get food for meetings. If professional designers can make posters that lead to far more people signing up for your programs, hire designers. If paying facilitators makes them do a better job or increases capacity, consider paying facilitators. I made another post about how EA groups might use money.
There is a lot of low hanging fruit in the community building space. You can do a lot without much prior experience (though knowing your stuff is important)
Learn, reflect, write and talk with people about EA (a lot): This is extremely important for many reasons:
Knowing what you’re trying to optimize for in the first place: Given that we are trying to do the most good possible, thinking about what this entails is quite important. Power law distributions exist for the impact of work in different causes (in terms of scale/neglectedness/tractability), and the impact of different interventions within causes (e.g. deworming or educating parents about the benefits of education compared to free uniforms).
As discussed in the "Take Ideas Seriously" section above, crucial considerations (e.g. around population ethics, longtermism, the meat-eater problem, long-term indirect effects) might unintuitively flip the sign of certain actions and causes, or at the very least make them less clearly highly effective.
Getting better at optimizing for what you want—many people, especially in this community, have similar goals to yours, so reading what’s already out there, and talking to people thinking about similar questions, can make you much better at achieving your (and the broader community’s) goals
Effectively communicating your vision and goals with others: You can communicate with the people you interact with better, have well-reasoned thoughts on how to best improve the world, have well-thought out responses to common questions, misconceptions, and criticisms; while also being able to teach and signal knowledge and commitment to other community members.
Leaving strong first impressions and minimizing reputational risks: Practicing how to introduce and talk about EA well, knowing why you believe what you believe (along with counterarguments and responses to them), and being able to articulate your ideas clearly are quite important for being a strong community builder and for motivating others to get more involved.
Concrete recommendations for learning more (I’d love to hear recommendations from others!):
The Scout Mindset (for developing critical thinking/good judgment)
Strangers Drowning (more motivation to put the ideas into practice)
Participating in (and then facilitating) intro and in-depth fellowships, career programs, and reading groups (check out EA Virtual Programs or your local EA group to sign up, and the EA Hub Resources page for resources on running these programs).
Advertise aggressively: Make sure that anyone who would be interested in your group, if they knew what it was, actually finds out about it and can get involved. This could look like making a website, a mailing list, an FB page (and inviting all your friends to like it), an Instagram, inviting friends personally to events or programming (highly recommended), posting on class social media pages (especially first and second years, undergrad and postgrad), emailing every relevant mailing list you have send-access to, physical posters, and anything else you can think of.
Places to advertise: Check out this awesome resource for recommendations. I also shared my experience with Stanford in this doc. I’ve also included other places to advertise in the “Staying Connected to the Community” section.
There’s lots of good EA content out there to advertise that’s intro-friendly, like what I’ve mentioned above.
Focus on retention and deep engagement over shallow engagement: Highly engaged members planning on dedicating their career to doing the most good will likely be much more impactful than people who are sympathetic to the ideas, but not enough to change their career (or donation) plans. Additionally, highly engaged members will help you grow your group. Many high-priority cause areas within EA (especially meta and longtermist causes) are largely constrained by people, not funding. As Ben Todd discusses here, these causes need more highly dedicated, altruistic, talented people willing to use their careers to make progress.
Examples: Concrete steps Stanford EA has taken include:
an in-depth fellowship after our intro fellowship,
a sib-fam program for new members to befriend more experienced members,
a career-planning intensive,
a system for checking in on formerly involved members—quarterly calls and check-ins (currently being set up),
and several reading groups.
Prioritize 1:1s and personalized programming: When I think about the most transformative moments in my EA journey, I think nearly all of them are either readings (e.g. Strangers Drowning) or 1:1 conversations. Engaging with EA is very personal—especially when it comes to deciding how EA will guide your career path. 1:1s also offer you the chance to demonstrate how a real person puts abstract EA concepts into practice. I intellectually understood what opportunity cost, counterfactuals, and expected value meant, but seeing them come up constantly in discussions about cause prioritization, what to do with members’ careers, or how to spend our limited SEA exec member time, made these concepts, and the importance of their repeated application, much more salient.
As for the importance of personalized programming—when Stanford EA first ran our fellowship, we ran one big section of 15-20 fellows in one big weekly discussion. Despite weekly catered food, only one fellow got more involved with the group afterwards, and that was largely due to friendship with an exec member (not the fellowship).
I then switched our model to small groups of 2:1 to 5:1 (fellow:organizer), largely depending on capacity; I've had the most success with 3:1 groups so far. This increased attendance, reading completion, and engagement; improved our ability to personally address questions, criticisms, and key takeaways from the material; and led to fellows befriending organizers.
Since this change, we’ve regularly had 5-10 Stanford fellows express interest in getting more involved after the fellowship, and 1-3 becoming highly engaged organizers per quarter.
Make getting involved very clear and easy: Stanford EA is still figuring out how best to do this, but our (simplified) current pipeline is: Intro Fellowship → In-Depth Fellowship / Sib-Fam / joining the organizer team + attending socials/events → Career Intensive / reading groups / lots of 1:1s with experienced members → Board Member (running the aforementioned things, and thinking of other high-impact things to do with their time + career).
Emphasize the next step and best way to get more involved multiple times (e.g. post-intro fellowship email, (multiple) 1:1s with eager fellowship graduates, communication on social media right before deadlines).
Regularly check in with people who have previously expressed interest and then dropped off (set up a community database and reminder system to make sure this happens).
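As a purely hypothetical sketch of the reminder-system idea (the member names, field layout, and 91-day quarter length are assumptions for illustration, not Stanford EA's actual setup), a minimal Python script could flag members whose last check-in was more than a quarter ago:

```python
from datetime import date, timedelta

QUARTER = timedelta(days=91)  # roughly one quarter

# Hypothetical community database: member name -> date of last check-in
members = {
    "Alice": date(2021, 1, 15),
    "Bob": date(2021, 6, 1),
    "Carol": date(2020, 11, 3),
}

def overdue_checkins(db, today):
    """Return members whose last check-in was more than a quarter ago."""
    return sorted(name for name, last in db.items() if today - last > QUARTER)

print(overdue_checkins(members, date(2021, 7, 1)))  # prints ['Alice', 'Carol']
```

In practice, a spreadsheet or CRM tool with built-in reminders accomplishes the same thing; the point is simply to make the quarterly check-ins happen automatically rather than relying on memory.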
Make group involvement valuable to members: Make members want to get more involved with the group, rather than treating involvement as a chore they begrudgingly take part in out of moral obligation.
Ways EA group involvement can be beneficial to group members:
Involvement can advance their careers, improve their social lives and integrate them into a community, provide networking and professional connections, sharpen their cause prioritization, and teach them how to think and be an agent, and how to improve their time management, productivity, reliability, etc.
Make your group a tight-knit community, team, and support network
Programming Suggestions (more details can be found here):
*Default Daily Meals* - Stanford EA has a default dining hall and time of day for daily lunches (this would work for dinners too): if you don’t have another place to be for lunch, there’s a default time and place to eat and hang out with other SEA members. This has worked quite well at helping attendees become better friends.
It can also be good to invite newer people outside the core group for meals (either just with yourself or with the broader group) to integrate peripheral members into the community, and prevent cliquiness.
Group chats - These have also been a great way for SEA members to become better friends with each other. While our group chat started out mostly for quick-response-time SEA administrative tasks, over time it slowly turned into a friend group chat (especially through socializing after doing EA organizing work).
1:1s (meals, walks, chats, calls, Donut)
Coworking (in person and online)
Sib-fams and other opportunities for people to spend time together weekly (or more often!) and bond (fellowships can also serve this function)
Living together (can be incredible (it was for me), but not always a good idea—see brief discussion in the comments)
Accountability buddies and goal setting
Other recurring social programming
Board game nights, dinners, etc.
Prioritize succession: Find at least one or two highly engaged members who would reliably keep the group running, ideally for the next 3+ years, if you were to leave, since succession is a big issue for many EA groups.
Advice from Yale: try not to have someone who is leaving the group the following year serve as the sole president (so they can advise the next president(s) the year after). Many formerly highly successful groups have (almost) died (including Stanford EA), so this is a very real concern.
Be proactive and experimental: No one really knows how to run an EA group perfectly and we’re all trying to figure it out together (see above), so try things and don’t just stick to the beaten path!
Be ambitious and always think about how to scale things up—Community Building/meta-EA work rewards time investment and getting others involved. Returns can be super-linear to effort and number of people (e.g. maybe you can’t run a fellowship with just one person, but can with three). It is possible to make significant contributions without having been involved for very long.
As I mentioned at the beginning of the post, Stanford EA had effectively one real member until July of 2019. But within the last two years Stanford EA has become one of the largest EA groups in the world.
We saw that there were very few internship opportunities for longtermist students, so even though SERI only started last year, it is running a 70-person internship program for existential-risk-interested students and ran a conference on existential risks attended by over 1,000 people (both run almost exclusively by student organizers and myself).
When the pandemic hit, we realized the online transition was a great opportunity to open up (nearly all of) our programming (specifically our intro/in-depth fellowships, Precipice reading group, SERI summer program and conference) to the global EA community, and helped lay the foundation for the global EA virtual programs. This decision has played an important role in the creation of new EA groups, and in one case, even a new existential risk initiative and summer research program. And there is so much low-hanging fruit that has not yet been picked! That being said, coordination is important. If you are interested in scalable student community building, consider applying to the EA Infrastructure fund, and/or reach out to me (email@example.com) to discuss ideas.
Try to have a strong reputation, especially with your target audience: When you’re considering work that is public-facing or has a large potential audience, check in with others to make sure the upsides outweigh the downsides (which can easily not be the case because of things as seemingly trivial as wording!). Doing so can prevent PR risks and bad first impressions, give you more perspectives on what others find compelling/interesting, and more. Regarding your group’s reputation, some helpful questions are: “How do outsiders/new people (especially people who could be highly impactful if they got really involved) perceive the group? Do we look like really smart, thoughtful, caring people working hard to improve the world, or do we look like preachy, out-of-touch, fanatical technocrats? How can we make our reputation more of the former?”
Network and communicate with others a lot (both inside and outside the community): It’s good for you (for professional reasons, developing social skills, learning more about the world, etc.) and for other group members (who you can then put in touch with relevant connections).
Communicate with other organizers/the EA community: The EA community is almost always very eager to help out, by sharing resources, offering to give talks/Q&As/workshops, editing blog posts, and more. If you want to run something, chances are other people have too, so ask them for resources, advice, etc. Specific community resources are mentioned above.
EAG(x) conferences are a great opportunity to network. Most people I know at EAG try to fill up their whole conference with meetings with cool community members—take advantage of that! A simple “Hello, I run Stanford EA and thought it might be good for us to discuss potential collaborations, or to connect group members interested in [insert potential reason for connection] with you to learn about your work” has a pretty high response rate.
Some new updates I’d like to highlight: we’re running a sib-fam program, are setting up a CRM system (through Airtable) for recurring (quarterly) calls/check-in messages with formerly active members, and will likely roll out a High Impact Careers program in the fall (I will probably share a draft soon in another post). Feel free to contact me (firstname.lastname@example.org) if you’d like to discuss any of these.
Sharing Best Practices and Feedback
I hope my experiences and advice are helpful! I’d love to hear your experiences, best practices, and lessons learned in the comments. I’d also appreciate feedback and criticism on my above suggestions.