University EA Groups Need Fixing

(Cross-posted from my website.)

I recently resigned as Columbia EA President and have stepped away from the EA community. This post aims to explain my EA experience and some reasons why I am leaving EA. I will discuss poor epistemic norms in university groups, why retreats can be manipulative, and why paying university group organizers may be harmful. Most of my views on university group dynamics are informed by my experience with Columbia EA. My knowledge of other university groups comes from conversations with other organizers from selective US universities, but I don’t claim to have a complete picture of the university group ecosystem.

Disclaimer: I’ve written this piece in a more aggressive tone than I initially intended. I suppose the writing style reflects my feelings of EA disillusionment and betrayal.

My EA Experience

During my freshman year, I heard about a club called Columbia Effective Altruism. Word on the street was that it was a cult, but I was intrigued. Every week, my friend would return from the fellowship and share what he learned. I was fascinated. Once spring rolled around, I applied for the spring Arete (Introductory) Fellowship.

After enrolling in the fellowship, I quickly fell in love with effective altruism. Everything about EA seemed just right—it was the perfect club for me. EAs were talking about the biggest and most important ideas of our time. The EA community was everything I hoped college would be. I felt like I had found my people. I found people who actually cared about improving the world. I found people who strove to tear down the sellout culture at Columbia.

After completing the Arete Fellowship, I reached out to the organizers asking how I could get more involved. They told me about EA Global San Francisco (EAG SF) and a longtermist community builder retreat. Excited, I applied to both and was accepted. Just three months after getting involved with EA, I was flown out to San Francisco for a fancy conference and a seemingly exclusive retreat.

EAG SF was a lovely experience. I met many people who inspired me to be more ambitious. My love for EA further cemented itself. I felt psychologically safe and welcomed. After about thirty one-on-ones, the conference was over, and I was on my way to an ~exclusive~ retreat.

I like to think I can navigate social situations elegantly, but at this retreat, I felt totally lost. All these people around me were talking about so many weird ideas I knew nothing about. When I’d hear these ideas, I didn’t really know what to do besides nod my head and occasionally say “that makes sense.” After each one-on-one, I knew that I shouldn’t update my beliefs too much, but after hearing almost every person talk about how AI safety is the most important cause area, I couldn’t help but be convinced. By the end of the retreat, I went home a self-proclaimed longtermist who prioritized AI safety.

It took several months to sober up. After rereading some notable EA criticisms (Bad Omens, Doing EA Better, etc.), I realized I had been duped. My poor epistemics led me astray, but weirdly enough, they also gained me social points in EA circles. At the retreat and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety. Nowadays, when I tell people I might not be a longtermist and don’t prioritize AI safety, the burden of proof is on me to explain why I “dissent” from EA. If you’re a longtermist AI safety person, there’s no need to offer evidence to defend your view.

(I would be really excited if experienced EAs more often asked EA newbies why they take AI safety seriously. I think what normally happens is that the experienced EA gets super excited and thinks to themselves, “How can I accelerate this person on their path to impact?” The naïve answer is to point them only towards upskilling and internship opportunities. Asking the newbie why they prioritize AI safety may not seem immediately useful and may even convince them not to prioritize AI safety, God forbid!)

I became President of Columbia EA shortly after returning home from EAG SF and the retreat, and I’m afraid I did some suboptimal community building. Here are two mistakes I made:

  1. In the final week of the Arete Fellowship (which I was facilitating), I asked the participants what they thought the most pressing problem was. One said climate change, two said global health, and two said AI safety. Neither of the people who said AI safety had any background in AI. If, after Arete, someone without a background in AI decides that AI safety is the most important issue, something has likely gone wrong. (Note: prioritizing any non-mainstream cause area after Arete is epistemically shaky. By mainstream, I mean a cause area that someone would have a high prior on.) I think poor epistemics are often a central part of the mechanism that leads people to prioritize AI safety after completing the Arete Fellowship. Unfortunately, rather than flagging this as epistemically shaky and supporting those members in developing better epistemics, I dedicated my time and resources to pushing them to apply to EAG(x)’s, GCP workshops, and our other advanced fellowships. I did not follow up with the others in the cohort.

  2. I hosted a retreat with students from Columbia, Cornell, NYU, and UPenn. All participants were new EAs (either still completing Arete or having just finished it). I think I felt pressure to host a retreat because “that’s what all good community builders do.” The social dynamics at this retreat were pretty solid (in my opinion), but afterwards I felt discontent. I had not convinced any of the participants to take EA seriously, and I felt like I had failed. Even though I knew that convincing people of EA wasn’t necessarily the goal, I still implicitly aimed for it.

I served as president for a year and have since stepped down and dissociated myself from EA. I don’t know if/when I will rejoin the community, but I was asked to share my concerns about EA, particularly university groups, so here they are!

Epistemic Problems in Undergraduate EA Communities

Every highly engaged EA I know has converged on AI safety as the most pressing problem, whether or not they have a background in AI. The notable exceptions are those who were already deeply committed to animal welfare or who have a strong background in biology: the pre-EA animal welfare folks pursue careers in animal welfare, and the pre-EA biology folks pursue careers in biosecurity. I suspect that even some of these notable exceptions have not performed rigorous cause prioritization, and I also think it’s unlikely that the students who converge on AI safety have done so. I don’t think this is that bad, because cause prioritization is super hard, especially if it leads you to work on a cause you have no prior experience in. But I am scared of a community that emphasizes the importance of cause prioritization while few people actually do it.

Perhaps people are okay with deferring their cause prioritization to EA organizations like 80,000 Hours, but I don’t think many would openly admit that their cause prioritization is the result of deferral. We often think of cause prioritization as key to the EA project, and admitting to deferring on one’s cause prioritization is to reject a part of the effective altruism project. I understand that everyone has to defer on significant parts of their cause prioritization, but I am very concerned with just how little cause prioritization seems to be happening at my university group. It would be great if more university group organizers encouraged their members to focus on cause prioritization. If groups started organizing writing fellowships where people work through their cause prioritization, I think we could make significant improvements.

My Best Guess on Why AI Safety Grips Undergraduate Students

The college groups that I know best, including Columbia EA, seem to function as factories for churning out people who care about existential risk reduction. Here’s how I see each week of the Arete (Intro) Fellowship play out.

  1. Woah! There’s an immense opportunity to do good! You can use your money and your time to change the world!

  2. Wow! Some charities are way better than others!

  3. Empathy! That’s nice. Let’s empathize with animals!

  4. Doom! The world might end?! You should take this more seriously than everything we’ve talked about before in this fellowship!

  5. Longtermism! You should care about future beings. Oh, you think that’s a weird thing to say? Well, you should take ideas more seriously!

  6. AI is going to kill us all! You should be working on this. 80k told me to tell you that you should work on this.

  7. This week we’ll be discussing WHAT ~YOU~ THINK! But if you say anything against EA, I (your facilitator) will lecture for a few minutes defending EA (sometimes rightfully so, other times not so much).

  8. Time to actually do stuff! Go to EAG! Go to a retreat! Go to the Bay!

I’m obviously exaggerating what the EA fellowship experience is like, but I think this is pretty close to describing the dynamics of EA fellowships, especially when the fellowship is run by an inexperienced, excited, new organizer. Once the fellowship is over, the people who stick around are those who were sold on the ideas espoused in weeks 4, 5, and 6 (existential risks, longtermism, and AI), whether because their facilitators were passionate about those topics, because they were tech bros, or because they were inclined toward those ideas due to social pressure or emotional appeal. The folks who were intrigued by weeks 1, 2, and 3 (animal welfare, global health, and cost-effectiveness) but dismissed longtermism, x-risks, or AI safety may (mistakenly) think there is no place for them in EA. Over time, the EA group continues to select for people with those values, and before you know it, your EA group is a factory that churns out x-risk reducers, longtermists, and AI safety prioritizers. I am especially fearful that almost every person who becomes highly engaged through their college group will have worldviews and cause prioritizations strikingly similar to those of the people who compiled the EA handbook (the intro fellowship syllabus) and AGISF.

It may be that AI safety is in fact the most important problem of our time, but there is an epistemic problem in EA groups that cannot be ignored. I’m not willing to trade off epistemic health for churning out more excellent AI safety researchers. (This is an oversimplification; I understand that some of the best AI safety researchers have excellent epistemics as well.) Some acclaimed EA groups might be excellent at churning out competent AI safety prioritizers, but I would rather have a smaller, epistemically healthy group that embarks on the project of effective altruism.

Caveats

I suspect that I overestimate how much facilitators influence fellows’ thinking. People who become highly engaged mostly don’t do so because their facilitator was persuasive (persuasiveness plays a smaller part); rather, they become highly engaged because they already held worldviews that mapped closely onto EA.

How Retreats Can Foster an Epistemically Unhealthy Culture

In this section, I will argue that retreats cause people to take ideas seriously when they perhaps shouldn’t. Retreats make people more susceptible to buying into weird ideas. Those weird ideas may in fact be true, but the process of buying into them rests on shaky epistemic grounds.

Against Taking Ideas Seriously

According to LessWrong, “Taking Ideas Seriously is the skill/habit of noticing when a new idea should have major ramifications.” I think taking ideas seriously can be a useful skill, but I’m hesitant when people encourage new EAs to take ideas seriously.

Scott Alexander warns against taking ideas seriously:

for 99% of people, 99% of the time, taking ideas seriously is the wrong strategy. Or, at the very least, it should be the last skill you learn, after you’ve learned every other skill that allows you to know which ideas are or are not correct. The people I know who are best at taking ideas seriously are those who are smartest and most rational. I think people are working off a model where these co-occur because you need to be very clever to resist your natural and detrimental tendency not to take ideas seriously. But I think they might instead co-occur because you have to be really smart in order for taking ideas seriously not to be immediately disastrous. You have to be really smart not to have been talked into enough terrible arguments.

Why Do People Take Ideas Seriously in Retreats?

Retreats are sometimes believed to be one of the most effective university community building strategies. They heavily increase people’s engagement with EA, and people cite them as key to their on-ramp to EA and to taking ideas like AI safety, x-risks, and longtermism more seriously. I think retreats make people take ideas more seriously because they disable people’s epistemic immune system.

  1. A retreat is a foreign place. You might feel uncomfortable and less likely to “put yourself out there.” Disagreeing with the organizers, for example, “puts you out there.” Thus, you are unlikely to dissent from the views of the organizers and speakers. You may also paper over your discontents/disagreements so you can be part of the in-group.

  2. When people make confident claims about topics you know little about, there’s not much you can do. For five days, you are bombarded with arguments for AI safety, and what can you do in response? Sit in your room and read arguments and counterarguments so you’re better prepared to talk about these issues the next day? Absolutely not. The point of the retreat is to talk to people about big ideas that will change the world, and you are encouraged to take advantage of all the networking opportunities. There’s not enough time to do the due diligence of thinking through all the new, foreign ideas you’re hearing, so you are forced to implicitly trust your fellow retreat participants. Suddenly, you have unusually high credence in everything people have been talking about. Even if you do your due diligence after the retreat, you will be fighting an uphill battle against your unusually high prior on those “out there” takes from those really smart people at the retreat.

Other Retreat Issues

  1. Social dynamics are super weird. It can feel very alienating if you don’t know anyone at the retreat while everyone else seems to know each other. More speed friending with people you’ve never met before would be great.

  2. Lack of psychological safety

    1. I think it’s fine for conversations at retreats to be focused on sharing ideas and generating impact, but it shouldn’t feel like the only point of the conversation is impact. Friendships shouldn’t feel centered around impact. It’s a bad sign if people feel that they will jeopardize a relationship if they stop appearing to be impactful.

    2. The pressure to appear to be “in the know” and send the right virtue signals can be overwhelming, especially in group settings.

  3. Not related to retreats, but similar: sending people to the Bay Area is weird. Why do people suddenly start to take longtermist, x-risk, and AI safety ideas more seriously when they move to the Bay? I suspect moving to the Bay Area has effects similar to attending a retreat.

University Group Organizer Funding

University group organizers should not be paid so much. I was paid an outrageous amount of money to lead my university’s EA group. I will not apply for university organizer funding again even if I do community build in the future.

Why I Think Paying Organizers May Be Bad

  1. Being paid to run a college club is weird. All other college students volunteer to run their clubs. If my campus newspaper found out I was being paid this much, I am sure an EA take-down article would be published shortly after.

  2. I doubt that paying university group organizers this much increases their counterfactual impact. I don’t think organizers spend much more time on their groups because of the payment. Most EA organizers are from wealthy backgrounds, so the money is not clearing many bottlenecks (need-based funding would be great—see the Potential Solutions section).

    1. Getting paid to organize did not make me take my role more seriously, and I suspect that other organizers did not take their roles much more seriously because of being paid. I’d be curious to read the results of the university group organizer funding exit survey to learn more about how impactful the funding was.

Potential Solutions

  1. Turn the University Group Organizer Fellowship into a need-based fellowship. This is likely to eliminate financial bottlenecks in people’s lives and accelerate their path to impact, while not wasting money on those who do not face financial bottlenecks.

  2. If the University Group Organizer Fellowship exit survey indicates that funding was somewhat helpful in increasing people’s commitment to quality community building, then reduce funding to $15/hour (I’m just throwing this number out there; the bottom line is to reduce the hourly rate significantly). If the results indicate that funding had little to no impact, abandon it (it’s not worth the reputational risks and weirdness). I think it’s unlikely that the survey results will show the funding was exceptionally impactful.

Final Remarks

I found an awesome community at Columbia EA, and I plan to continue hanging out with the organizers. But for my mental health, and for the reasons outlined above, I think it’s time I stop organizing. I plan to spend the next year focusing on my cause prioritization and building general competencies. If you are a university group organizer and have concerns about your community’s health, please don’t hesitate to reach out.