University EA Groups Need Fixing
(Cross-posted from my website.)
I recently resigned as Columbia EA President and have stepped away from the EA community. This post aims to explain my EA experience and some reasons why I am leaving EA. I will discuss poor epistemic norms in university groups, why retreats can be manipulative, and why paying university group organizers may be harmful. Most of my views on university group dynamics are informed by my experience with Columbia EA. My knowledge of other university groups comes from conversations with other organizers from selective US universities, but I don’t claim to have a complete picture of the university group ecosystem.
Disclaimer: I’ve written this piece in a more aggressive tone than I initially intended. I suppose the writing style reflects my feelings of EA disillusionment and betrayal.
My EA Experience
During my freshman year, I heard about a club called Columbia Effective Altruism. Word on the street was that it was a cult, but I was intrigued. Every week, my friend would return from the fellowship and share what he had learned. I was fascinated. Once spring rolled around, I applied for the spring Arete (Introductory) Fellowship.
After enrolling in the fellowship, I quickly fell in love with effective altruism. Everything about EA seemed just right—it was the perfect club for me. EAs were talking about the biggest and most important ideas of our time. The EA community was everything I had hoped college would be. I felt like I had found my people. I found people who actually cared about improving the world. I found people who strove to tear down the sellout culture at Columbia.
After completing the Arete Fellowship, I reached out to the organizers asking how I could get more involved. They told me about EA Global San Francisco (EAG SF) and a longtermist community builder retreat. Excited, I applied to both and was accepted. Just three months after getting involved with EA, I was flown out to San Francisco for a fancy conference and a seemingly exclusive retreat.
EAG SF was a lovely experience. I met many people who inspired me to be more ambitious. My love for EA further cemented itself. I felt psychologically safe and welcomed. After about thirty one-on-ones, the conference was over, and I was on my way to an ~exclusive~ retreat.
I like to think I can navigate social situations elegantly, but at this retreat, I felt totally lost. All these people around me were talking about so many weird ideas I knew nothing about. When I’d hear these ideas, I didn’t really know what to do besides nod my head and occasionally say “that makes sense.” After each one-on-one, I knew that I shouldn’t update my beliefs too much, but after hearing almost every person talk about how AI safety is the most important cause area, I couldn’t help but be convinced. By the end of the retreat, I went home a self-proclaimed longtermist who prioritized AI safety.
It took several months to sober up. After rereading some notable EA criticisms (Bad Omens, Doing EA Better, etc.), I realized I had been duped. My poor epistemics led me astray, but weirdly enough, they also gained me some social points in EA circles. While at the retreat and at EA events afterwards, I was socially rewarded for telling people that I was a longtermist who cared about AI safety. Nowadays, when I tell people I might not be a longtermist and don’t prioritize AI safety, the burden of proof is on me to explain why I “dissent” from EA. If you’re a longtermist AI safety person, there’s no need to offer evidence to defend your view.
(I would be really excited if experienced EAs more often asked EA newbies why they take AI safety seriously. I think what normally happens is that the experienced EA gets super excited and thinks to themselves, “how can I accelerate this person on their path to impact?” The naïve answer is to point them only towards upskilling and internship opportunities. Asking the newbie why they prioritize AI safety may not seem immediately useful and may even convince them not to prioritize AI safety, God forbid!)
I became President of Columbia EA shortly after returning home from EAG SF and the retreat, and I’m afraid I did some suboptimal community building. Here are two mistakes I made:
In the final week of the Arete Fellowship (which I was facilitating), I asked the participants what they thought the most pressing problem was. One said climate change, two said global health, and two said AI safety. Neither of the people who said AI safety had any background in AI. If, after Arete, someone without a background in AI decides that AI safety is the most important issue, then something has likely gone wrong. (Note: prioritizing any non-mainstream cause area after Arete is epistemically shaky; by mainstream, I mean a cause area that someone would have a high prior on.) I think poor epistemics are often a central part of the mechanism that leads people to prioritize AI safety after completing the Arete Fellowship. Unfortunately, rather than flagging this as epistemically shaky and supporting those members in developing better epistemics, I dedicated my time and resources to pushing them to apply to EAG(x)s, GCP workshops, and our other advanced fellowships. I did not follow up with the others in the cohort.
I hosted a retreat with students from Columbia, Cornell, NYU, and UPenn. All participants were new EAs (either still completing Arete or just finished Arete). I think I felt pressure to host a retreat because “that’s what all good community builders do.” The social dynamics at this retreat were pretty solid (in my opinion), but afterwards I felt discontent. I had not convinced any of the participants to take EA seriously, and I felt like I had failed. Even though I knew that convincing people of EA wasn’t necessarily the goal, I still implicitly aimed for that goal.
I served as president for a year and have since stepped down and dissociated myself from EA. I don’t know if/when I will rejoin the community, but I was asked to share my concerns about EA, particularly university groups, so here they are!
Epistemic Problems in Undergraduate EA Communities
Every highly engaged EA I know has converged on AI safety as the most pressing problem. Whether or not they have a background in AI, they have converged on AI safety. The notable exceptions are those who were already deeply committed to animal welfare or those who have a strong background in biology: the pre-EA animal welfare folks pursue careers in animal welfare, and the pre-EA biology folks pursue careers in biosecurity. To me, some of these notable exceptions may not have performed rigorous cause prioritization. For students who converge on AI safety, I also think it’s unlikely that they have performed rigorous cause prioritization. I don’t think this is that bad, because cause prioritization is super hard, especially if it leads you to work on a cause you have no prior experience in. But I am scared of a community that emphasizes the importance of cause prioritization yet in which few people actually do it.
Perhaps people are okay with deferring their cause prioritization to EA organizations like 80,000 Hours, but I don’t think many people would have the guts to openly admit that their cause prioritization is a result of deferral. We often think of cause prioritization as key to the EA project, and to admit to deferring on one’s cause prioritization is to reject a part of the Effective Altruism project. I understand that everyone has to defer on significant parts of their cause prioritization, but I am very concerned by just how little cause prioritization seems to be happening at my university group. It would be great if more university group organizers encouraged their members to focus on cause prioritization. If groups started organizing writing fellowships where people work through their cause prioritization, we could make significant improvements.
My Best Guess on Why AI Safety Grips Undergraduate Students
The college groups that I know best, including Columbia EA, seem to function as factories for churning out people who care about existential risk reduction. Here’s how I see each week of the Arete (Intro) Fellowship play out.
Week 1: Woah! There’s an immense opportunity to do good! You can use your money and your time to change the world!
Week 2: Wow! Some charities are way better than others!
Week 3: Empathy! That’s nice. Let’s empathize with animals!
Week 4: Doom! The world might end?! You should take this more seriously than everything we’ve talked about before in this fellowship.
Week 5: Longtermism! You should care about future beings. Oh, you think that’s a weird thing to say? Well, you should take ideas more seriously!
Week 6: AI is going to kill us all! You should be working on this. 80k told me to tell you that you should work on this.
Week 7: This week we’ll be discussing WHAT ~YOU~ THINK! But if you say anything against EA, I (your facilitator) will lecture for a few minutes defending EA (sometimes rightfully so, other times not so much).
Week 8: Time to actually do stuff! Go to EAG! Go to a retreat! Go to the Bay!
I’m obviously exaggerating what the EA fellowship experience is like, but I think this is pretty close to describing the dynamics of EA fellowships, especially when the fellowship is run by an inexperienced, excited, new organizer. Once the fellowship is over, the people who stick around are those who were sold on the ideas espoused in weeks 4, 5, and 6 (existential risks, longtermism, and AI), either because their facilitators were passionate about those topics, because they were tech bros, or because they were inclined toward those ideas by social pressure or emotional appeal. The folks who were intrigued by weeks 1, 2, and 3 (animal welfare, global health, and cost-effectiveness) but dismissed longtermism, x-risks, or AI safety may (mistakenly) think there is no place for them in EA. Over time, the EA group continues to select for people with those values, and before you know it, your EA group is a factory that churns out x-risk reducers, longtermists, and AI safety prioritizers. I am especially fearful that almost every person who becomes highly engaged because of their college group is going to have worldviews and cause prioritizations strikingly similar to those of the people who compiled the EA handbook (the intro fellowship syllabus) and AGISF.
It may be that AI safety is in fact the most important problem of our time, but there is an epistemic problem in EA groups that cannot be ignored. I’m not willing to trade off epistemic health for churning out more excellent AI safety researchers (This is an oversimplification. I understand that some of the best AI researchers have excellent epistemics as well). Some acclaimed EA groups might be excellent at churning out competent AI safety prioritizers, but I would rather have a smaller, epistemically healthy group that embarks on the project of effective altruism.
Caveats
I suspect that I overestimate how much facilitators influence fellows’ thinking. I think people who become highly engaged do so not primarily because their facilitator was very persuasive (persuasiveness plays a smaller part), but because they already had worldviews that mapped closely onto EA.
How Retreats Can Foster an Epistemically Unhealthy Culture
In this section, I will argue that retreats cause people to take ideas seriously when they perhaps shouldn’t. Retreats make people more susceptible to buying into weird ideas. Those weird ideas may in fact be true, but the process of buying into them rests on shaky epistemic grounds.
Against Taking Ideas Seriously
According to LessWrong, “Taking Ideas Seriously is the skill/habit of noticing when a new idea should have major ramifications.” I think taking ideas seriously can be a useful skill, but I’m hesitant when people encourage new EAs to take ideas seriously.
Scott Alexander warns against taking ideas seriously:
for 99% of people, 99% of the time, taking ideas seriously is the wrong strategy. Or, at the very least, it should be the last skill you learn, after you’ve learned every other skill that allows you to know which ideas are or are not correct. The people I know who are best at taking ideas seriously are those who are smartest and most rational. I think people are working off a model where these co-occur because you need to be very clever to resist your natural and detrimental tendency not to take ideas seriously. But I think they might instead co-occur because you have to be really smart in order for taking ideas seriously not to be immediately disastrous. You have to be really smart not to have been talked into enough terrible arguments.
Why Do People Take Ideas Seriously in Retreats?
Retreats are sometimes believed to be one of the most effective university community building strategies. Retreats heavily increase people’s engagement with EA. People cite retreats as being key to their onramp to EA and taking ideas like AI safety, x-risks, and longtermism more seriously. I think retreats make people take ideas more seriously because retreats disable people’s epistemic immune system.
Retreats are a foreign place. You might feel uncomfortable and less likely to “put yourself out there.” Disagreeing with the organizers, for example, “puts you out there.” Thus, you are unlikely to dissent from the views of the organizers and speakers. You may also paper over your discontents/disagreements so you can be part of the in-group.
When people make confident claims about topics you know little about, there’s not much you can do. For five days, you are bombarded with arguments for AI safety, and what can you do in response? Sit in your room and read arguments and counterarguments so you can be better prepared to talk about these issues the next day? Absolutely not. The point of the retreat is to talk to people about big ideas that will change the world, and you are encouraged to take advantage of all the networking opportunities. There’s not enough time to think through all the new, foreign ideas you’re hearing, so with no chance to do your due diligence on what people are confidently talking about, you are forced to implicitly trust your fellow retreat participants. Suddenly, you have unusually high credence in everything people have been talking about. Even if you decide to do your due diligence after the retreat, you will be fighting an uphill battle against your unusually high prior on those “out there” takes from those really smart people at the retreat.
Other Retreat Issues
Social dynamics are super weird. It can feel very alienating if you don’t know anyone at the retreat while everyone else seems to know each other. More speed friending with people you’ve never met before would be great.
Lack of psychological safety
I think it’s fine for conversations at retreats to be focused on sharing ideas and generating impact, but it shouldn’t feel like the only point of the conversation is impact. Friendships shouldn’t feel centered around impact. It’s a bad sign if people feel that they will jeopardize a relationship if they stop appearing to be impactful.
The pressure to appear to be “in the know” and send the right virtue signals can be overwhelming, especially in group settings.
Not related to retreats but similar: sending people to the Bay Area is weird. Why do people suddenly start to take longtermist, x-risk, AI safety ideas more seriously when they move to the Bay? I suspect moving to the Bay Area has similar effects as going to retreats.
University Group Organizer Funding
University group organizers should not be paid so much. I was paid an outrageous amount of money to lead my university’s EA group. I will not apply for university organizer funding again even if I do community build in the future.
Why I Think Paying Organizers May Be Bad
Being paid to run a college club is weird. All other college students volunteer to run their clubs. If my campus newspaper found out I was being paid this much, I am sure an EA take-down article would be published shortly after.
I doubt that paying university group organizers this much increases their counterfactual impact. I don’t think organizers spend much more time on their groups because of the payment. Most EA organizers are from wealthy backgrounds, so the money is not clearing many bottlenecks (need-based funding would be great; see the Potential Solutions section).
Getting paid to organize did not make me take my role more seriously, and I suspect that other organizers did not take their roles much more seriously because of being paid. I’d be curious to read the results of the university group organizer funding exit survey to learn more about how impactful the funding was.
Potential Solutions
Turn the University Group Organizer Fellowship into a need-based fellowship. This is likely to eliminate financial bottlenecks in people’s lives and accelerate their path to impact, while not wasting money on those who do not face financial bottlenecks.
If the University Group Organizer Fellowship exit survey indicates that funding was somewhat helpful in increasing people’s commitment to quality community building, then reduce funding to $15/hour (I’m just throwing this number out there; the bottom line is to reduce the hourly rate significantly). If the results indicate that funding had little to no impact, abandon funding (not worth the reputational risks and weirdness). I think it’s unlikely that the survey results will indicate that the funding was exceptionally impactful.
Final Remarks
I found an awesome community at Columbia EA, and I plan to continue hanging out with the organizers. But I think it’s time I stop organizing, both for my mental health and for the reasons outlined above. I plan to spend the next year focusing on my cause prioritization and building general competencies. If you are a university group organizer and have concerns about your community’s health, please don’t hesitate to reach out.
Hey,
I’m really sorry to hear about this experience. I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries about my own beliefs stem from the worry that I’d have very different views if I’d found myself in a different social environment. It’s just very hard to successfully have a group of people who are trying both to figure out what’s correct and to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn’t agree is at best useless and at worst harmful (because they are promoting misinformation).
In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”.
One thing I’ll say is that core researchers are often (but not always) much more uncertain and pluralist than it seems from “the vibe”. The second half of Holden Karnofsky’s recent 80k blog post is indicative. Open Phil splits their funding across quite a number of cause areas, and I expect that to continue. Most of the researchers at GPI are pretty sceptical of AI x-risk. Even among people who are really worried about TAI in the next decade, there’s normally significant support (whether driven by worldview diversification or just normal human psychology) for neartermist or other non-AI causes. That’s certainly true of me. I think longtermism is highly non-obvious, and focusing on near-term AI risk even more so; beyond that, I think a healthy EA movement should be highly intellectually diverse and exploratory.
What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA. Currently, AI has an odd relationship to EA. Global health and development and farm animal welfare, and to some extent pandemic preparedness, had movements working on them independently of EA. In contrast, AI safety work currently overlaps much more heavily with the EA/rationalist community, because it’s more homegrown.
If AI had its own movement infrastructure, that would give EA more space to be its own thing. It could more easily be about the question “how can we do the most good?” and a portfolio of possible answers to that question, rather than one increasingly common answer — “AI”.
At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss. EA qua EA, which can live and breathe on its own terms, still has huge amounts of value: if AI progress slows; if it gets so much attention that it’s no longer neglected; if it turns out the case for AI safety was wrong in important ways; and because there are other ways of adding value to the world, too. I think most people in EA, even people like Holden who are currently obsessed with near-term AI risk, would agree.
As someone who is extremely pro investing in big-tent EA, my question is, “what does it look like, in practice, to implement ‘AI safety...should have its own movement, separate from EA’?”
I do think it is extremely important to maintain EA as a movement centered on the general idea of doing as much good as we can with limited resources. There is serious risk of AIS eating EA, but the answer to that cannot be to carve AIS out of EA. If people come to prioritize AIS from EA principles, as I do, I think it would be anathema to the movement to try to push their actions and movement building outside the EA umbrella. In addition, EA being ahead of the curve on AIS is, in my opinion, a fact to embrace and treat as evidence of the value of EA principles, individuals, and movement building methodology.
To avoid AIS eating EA, we have to keep reinvesting in EA fundamentals. I am so grateful and impressed that Dave published this post, because it’s exactly the kind of effort that I think is necessary to keep EA EA. I think he highlights specific failures in exploiting known methods of inducing epistemic … untetheredness?
For example, I worked with CFAR, where the workshops deliberately employed the same intensive atmosphere to get people to be receptive to new ways of thinking and actually open to changing their minds. I recognized that this was inherently risky, and I was always impressed that the ideas introduced in this state were about how to think better rather than about convincing workshop participants of any particular conclusion. Despite many of the staff and mentors being extremely convinced of the necessity of x-risk mitigation, I never once encountered discussion of how the rationality techniques should be applied to AIS.
To hear that this type of environment is de facto being used to sway people towards a particular cause prioritization, rather than to teach them how to do cause prioritization, makes me update significantly away from continuing the university pipeline as it currently exists. The comments on the funding situation are also new to me and seem to represent obvious errors. Thanks again Dave for opening my eyes to what’s currently happening.
“what does it look like, in practice, to implement ‘AI safety...should have its own movement, separate from EA’?”
Creating AI-safety-focused conferences, AI safety university groups, and AI safety local meet-up groups? Obviously attendees will initially overlap very heavily with EA conferences and groups, but having them separated out will lead to a bit of divergence over time.
Wouldn’t this run the risk of worsening the lack of intellectual diversity and epistemic health that the post mentions? The growing divide between long/neartermism might have led to tensions, but I’m happy that at least there’s still conferences, groups and meet-ups where these different people are still talking to each other!
There might be an important trade-off here, and it’s not clear to me what direction makes more sense.
I don’t think there’s much of a trade-off; I’d expect a decent proportion of AI safety people to still come to EA conferences.
I am all for efforts to do AIS movement building distinct from EA movement building by people who are convinced by AIS reasoning and not swayed by EA principles. There’s all kinds of discussion about AIS in academic/professional/media circles that never reference EA at all. And while I’d love for everyone involved to learn about and embrace EA, I’m not expecting that. So I’m just glad they’re doing their thing and hope they’re doing it well.
I could probably have asked the question better and made it, “what should EAs do (if anything), in practice to implement a separate AIS movement?” Because then it sounds like we’re talking about making a choice to divert movement building dollars and hours away from EA movement building to distinct AI safety movement building, under the theoretical guise of trying to bolster the EA movement against getting eaten by AIS? Seems obviously backwards to me. I think EA movement building is already under-resourced, and owning our relationship with AIS is the best strategic choice to achieve broad EA goals and AIS goals.
Or, the ideal form for the AI safety community might not be a “movement” at all! This would be one of the most straightforward ways to ward off groupthink and related harms, and it has been possible for other cause areas, for instance, global health work mostly doesn’t operate as a social movement.
Global health outside of EA may not have the issues associated with being a movement, but it has even bigger issues.
I wonder how this would look different from the current status quo:
Wytham Abbey cost £15m, and its site advertises it as basically being primarily for AI/x-risk use (as far as I can see it doesn’t advertise what it’s been used for to date)
Projects already seem to be highly preferentially supported based on how longtermist/AI-themed they are. I recently had a conversation with someone at OpenPhil in which, if I understood/remembered correctly, they said the proportion of OP funding going to nonlongtermist stuff was about 10%. [ETA sounds like this is wrong]
The global health and development fund seems to have been discontinued. The infrastructure fund, I’ve heard on the grapevine, strongly prioritises projects with a longtermist/AI focus. The other major source of money in the EA space is the Survival and Flourishing Fund, which lists its goal as ‘to bring financial support to organizations working to improve humanity’s long-term prospects for survival and flourishing’. The Nonlinear Network is also explicitly focused on AI safety, and the metacharity fund is nonspecific. The only EA-facing fund I know of that excludes longtermist concerns is the animal welfare one. Obviously there’s also GiveWell, but they’re not really part of the EA movement, inasmuch as they only support existing and already-well-developed-and-evidenced charities and not EA startups/projects/infrastructure as the other funding groups mentioned do.
These three posts by very prominent EAs all make the claim that we should basically stop talking about either EA and/or longtermism and just tell people they’re highly likely to die from AI (thus guiding them to ignore the—to my mind comparable—risks that they might die from supervolcanoes, natural or weakly engineered pandemics, nuclear war, great power war, and all the other stuff that longtermists uniquely would consider to be of much lesser importance because of the lower extinction risk).
And anecdotally, I share the OP’s experience that AI risk dominates EA discussion at EA cocktail parties.
To me this picture makes everything but AI safety already look like an afterthought.
Regarding the funding aspect:
As far as I can tell, Open Phil has always given the majority of their budget to non-longtermist focus areas.
This is also true of the EA portfolio more broadly.
GiveWell has made grants to less established orgs for several years, and that amount has increased dramatically of late.
Holden also stated in his recent 80k podcast episode that <50% of OP’s grantmaking goes to longtermist areas.
I realise I didn’t make this distinction, so I’m shifting the goalposts slightly, but I think it’s worth distinguishing between ‘direct work’ organisations and EA infrastructure. It seems pretty clear from the OP that the latter is being strongly encouraged to primarily support EA/longtermist work.
I’m a bit confused about the grammar of the last sentence—are you saying that EA infrastructure is getting more emphasis than direct work, or that people interested in infrastructural work are being encouraged to primarily support longtermism?
Sorry—the latter.
I’d imagine it’s much harder to argue that something like community building is cost-effective within something like global health, than within longtermist focused areas? There’s much more capacity to turn money into direct work/bednets, and those direct options seem pretty hard to beat in terms of cost effectiveness.
Community building can be nonspecific, where you try to build a group of people who have some common interest (such as something under big tent EA), or specific, where you try to get people who are working on some specific thing (such as working on AI/longtermist projects, or moving in that direction). My sense is that (per the OP) community builders are being pressured to do the latter.
The theory of change for community building is much stronger for long-termist cause areas than for global poverty.
For global poverty, it’s much easier to take a bunch of money and just pay people outside of the community to do things like hand out bed nets.
For x-risk, it seems much more valuable to develop a community of people who deeply care about the problem so that you can hire people who will autonomously figure out what needs to be done. This compares favourably to just throwing money at the problem, in which case you’re just likely to get work that sounds good, rather than work advancing your objective.
Right, although one has to watch for a possible effect on community composition. If not careful, this will end up with a community full of x-risk folks not necessarily because x-risk is correct cause prioritization, but because it was recruited for due to the theory of change issue you identify.
This seems like a self-fulfilling prophecy. If we never put effort into building a community around ways to reduce global poverty, we’ll never know what value they could have generated.
Also, it seems a priori really implausible that longtermists could usefully do more things in their sphere alone than EAs focusing on the whole of the rest of EA-concern-space could.
Well EA did build a community around it and we’ve seen that talent is a greater bottleneck for longtermism than it is for global poverty.
The flipside argument would be that funding is a greater bottleneck for global poverty than longtermism, and one might convince university students focused on global poverty to go into earning-to-give (including entrepreneurship-to-give). So the goals of community building may well be different between fields, and community building in each cause area should be primarily judged on its contribution to that cause area’s bottleneck.
I could see a world in which the maths works out for that.
I guess the tricky thing there is that you need the amount raised with discount factor applied to exceed the cost, incl. the opportunity cost of community builders potentially earning to give themselves.
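To spell that out (my own rough notation, purely illustrative): if R is the amount the group’s members eventually donate, d is a discount for the share of that giving that would have happened anyway, S is what gets spent on stipends and events, and O is the organizers’ own forgone earning-to-give, the break-even condition is roughly

d · R > S + O.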
And this seems to be a much tighter constraint than that imposed by longtermist theories of change.
True—although I think the costs would be much lower for university groups run by (e.g.) undergraduate student organizers who were paid typical student-worker wages (at most). The opportunity costs would seem much stronger for community organizing by college graduates than by students working a few hours a week.
Not really responding to the comment (sorry), just noting that I’d really like to understand why these researchers at GPI and careful-thinking AI alignment people—like Paul Christiano—have such different risk estimates! Can someone facilitate and record a conversation?
David Thorstad, who worked at GPI, blogs about reasons for his AI skepticism (and other EA critiques) here: https://ineffectivealtruismblog.com/
Which of David’s posts would you recommend as a particularly good example and starting point?
Imo it would be his Existential Risk Pessimism and the Time of Perils series (it’s based on a GPI paper of his that he also links to).
Clearly written, well-argued, and up there amongst his best work; I think it’s one of the better criticisms of xRisk/longtermist EA that I’ve seen.
I think he’s pointed out a fundamental tension in the utilitarian calculus here, and identified the additional assumption that xRisk-focused EAs have to make to get it to work (“the time of perils”); he plausibly argues that this assumption is more difficult to defend than the initial two (Existential Risk Pessimism and the Astronomical Value Thesis).[1]
I think it’s a rich vein of criticism that I’d like to see more xRisk-inclined EAs respond to further (myself included!).
I don’t want to spell the whole thing out here, go read those posts :)
Thanks! I read it, and it’s an interesting post, but it’s not “about reasons for his AI skepticism”. Browsing the blog, I assume I should read this?
Depends entirely on your interests! They are sorted thematically https://ineffectivealtruismblog.com/post-series/
Specific recommendations if your interests overlap with Aaron_mai’s: 1(a) on a tension between thinking X-risks are likely and thinking reducing X-risks have astronomical value; 1(b) on the expected value calculation in X-risk; 6(a) as a critical review of the Carlsmith report on AI risk.
The object-level reasons are probably the most interesting and fruitful, but for a complete understanding of how the differences might arise, it’s probably also valuable to consider:
sociological reasons
meta-level incentive reasons
selection effects
An interesting exercise could be to go through the categories and elucidate 1-3 reasons in each category for why AI alignment people might believe X and cause prio people might believe not X.
This seems like a strange position to me. Do you think people have to have a background in climate science to decide that climate change is the most important problem, or development economics to decide that global poverty is the moral imperative of our time? Many people will not have a background relevant to any major problem; are they permitted to have any top priority?
I think (apologies if I am mis-understanding you) you try to get around this by suggesting that ‘mainstream’ causes can have much higher priors and lower evidential burdens. But that just seems like deference to wider society, and the process by which mainstream causes became dominant does not seem very epistemically reliable to me.
I would like to second the objection to this. I feel as though most intros to AI Safety, such as AGISF, are detached enough from technical AI details that one could do the course without any past AI background.
(This isn’t an objection to the epistemics of quickly picking up a non-mainstream cause area, but rather to the claim that one needs an AI background to do so.)
I guess I’m unclear about what sort of background is important. ML isn’t actually that sophisticated, as it turns out (it could have been), but “climb a hill” or “think about an automaton, but with probability distributions and annotated with rewards” just don’t rely on more than a few semesters of math.
2⁄5 doesn’t seem like very strong evidence of groupthink to me.
I also wouldn’t focus on their background, but on things like whether they were able to explain the reasons for their beliefs in their own words or tended to simply fall back on particular phrases they’d heard.
(I lead the CEA uni groups team but don’t intend to respond on behalf of CEA as a whole and others may disagree with some of my points)
Hi Dave,
I just want to say that I appreciate you writing this. The ideas in this post are ones we have been tracking for a while and you are certainly not alone in feeling them.
I think there is a lot of fruitful discussion in the comments here about strategy-level considerations within the entire EA ecosystem and I am personally quite compelled by many of the points in Will’s comment. So, I will focus specifically on some of the considerations we have on the uni group level and what we are trying to do about this. (I will also flag that I could say a lot more on each of these but my response was already getting quite long and we wanted to keep it somewhat concise)
Epistemics
We are also quite worried about epistemic norms in university groups. We have published some of our advice around this on the forum here (though maybe we should have led with more concrete examples) and I gave a talk at EAG Bay Area on it.
We also try to screen for whether people actually understand the arguments behind the claims they are making & common arguments against those positions. This is a large part of what we try to screen for when looking for open-mindedness and truth-seeking. This is, of course, difficult and we do have false positives.
I will note, we sometimes admit people who we think don’t understand some important arguments because we expect students to generally be learning. I expect most clubs for any cause or idea to have a weaker bar though, and we still do screen for people being self-aware about the fact that they don’t understand certain arguments. We probe for this in interviews, such as by posing multiple counterarguments.
Concretely, the ~most common reason we decline to support groups (though we do encourage them to reapply later) is that we think the organizers “agree with” the ideas but don’t actually understand them or the important arguments around them. So we tell them they should focus on understanding the common arguments first (often by reading, for lack of a better option), etc., before running a group.
Personal anecdote: Part of what drew me to EA was the openness to new ideas and truth-seeking. This was especially apparent in my EA group compared to many other communities I interacted with on campus, which often refused to engage with certain arguments. I loved being in an intellectually vigorous environment where people did take ideas seriously, and I loved that my group was so skeptical about everything. I am sad to see some spaces in the EA community not upholding these values, even though I know it is based on good intentions.
Retreats
I want to apologize since I know you attended one of our summits and if you want to reach out with any additional feedback or suggestions, we would be keen to hear from you (either on the groups slack or via unigroups@centreforeffectivealtruism.org).
Retreats are definitely high-variance interventions. I do think there is more we can do to make them intellectually humble and welcoming spaces. I care a lot about psychological safety and think it is important for progress. We are always looking for feedback and ideas on how to improve this at future events and people can reach us at unigroups@centreforeffectivealtruism.org.
I do think there are big value-adds to retreats.
People normally go through their day-to-day lives without being able to set aside time to think about big ideas and how they might want to change their behavior in light of them. Retreats provide a space for this, which I think is valuable.
They also make applying these ideas to your life a real possibility by showing examples of people who have done so. For many people, these are an opportunity to see “woah, you can actually work on these things!”.
While I push back on the “retreats mainly act by disabling epistemic immune systems” frame, I will say I am a huge proponent of people having other communities to go back to and safe exit strategies. I think there are some good considerations around this in this post on going to an EA hub.
Personal anecdote: The first few retreats/workshops/summits I went to were really intense and I often felt like I didn’t belong, and I think that was bad (although afaict somewhat common for retreats in other clubs with new, unfamiliar people), but I didn’t regret going to them. Reflecting back on them, I think they were hugely valuable for me as a person and for thinking through my impact. Though I did appreciate having a community I could return to that could push back against ideas, and I personally encourage people to have this.
Paying Organizers
The Open Philanthropy Organizer Fellowship is the main source of funding for organizers’ time (and they manage that fellowship themselves, without CEA’s involvement) but CEA does offer some stipends. I do think that for some (but not all) people this can have a large effect on how much time they can spend on their group and on upskilling.
I am pretty sympathetic to need-based considerations but these are pretty hard to track. We have moved to our stipends being opt-in rather than default to help with this.
We also follow a method of not giving out our entire stipend amounts until the end of the semester so we can verify that organizers did complete the requirements we asked of them.
I do think organizers shouldn’t expect to be paid for this type of work by default and we are considering not offering stipends in the future (though we are still collecting data on their helpfulness).
Personal anecdote: I worked a few part-time jobs in college and being paid to run my group enabled me to spend my time on what I thought was most impactful and I really appreciated that. However, in my last semester, I didn’t need the funding and opted out of it.
Hi Jessica, if you have time, I’d love to get your thoughts on some of my suggestions to improve university group epistemics via the content of introductory fellowships: https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing?commentId=z7rPNpaqNZPXH2oBb
Hi Dave,
Thanks for taking the time to write this. I had an almost identical experience at my university. I helped re-start the club, with every intention of leading it, but I am no longer associated with it because of the lack of willingness from others to engage with AI safety criticisms or to challenge their own beliefs regarding AI safety/existential risk.
I also felt that those in our group who prioritized AI safety had an advantage in getting recognition from more senior members of the city group, forming connections with other EAs in the club, and getting funding from EA orgs. I was quite certain I could get funding from CEA too, as long as I lied and said I prioritized AI safety/existential risk, but I wasn’t willing to do that. I also felt the money given to other organizers in the club was not necessary and did not have any positive outcomes other than for those individuals.
I am now basically fully estranged from the club (which sucks, because I actually enjoyed the company of everyone) because I do not feel like my values, and the values I originally became interested in EA for (such as epistemic humility), exist in the space I was in.
I did manage to have a few conversations with people in the club about AI safety that were somewhat productive, and I am grateful for those people (one senior EA community member who works in AI safety in particular). But despite this, our club basically felt like an AI safety club. Almost every student involved (at least the consistent ones, and the president) was AI safety focused. In addition, they were mainly interested in starting AI safety reading groups, and most conversations led to AI safety (other than in a philosophy group that my partner and I started but eventually stopped running).
Thanks for writing this. This comment, in connection with Dave’s, reminds me that paying people—especially paying them too much—can compromise their epistemics. Of course, paying people is often a practical necessity for any number of reasons, so I’m not suggesting that EA transforms into a volunteer-only movement.
I’m not talking about grift but something that has insidious onset in the medical sense: slow, subtle, and without the person’s awareness. If one believes that financial incentives matter (and they seemingly must for the theory of change behind paying university organizers to make much sense), it’s important to consider the various ways in which those incentives could lead to bad epistemics for the paid organizer.
If student organizers believe they will be well-funded for promoting AI safety/x-risk much more so than broad-tent EA, we would expect that to influence how they approach their organizing work. Moreover, reduction of cognitive dissonance can be a powerful drive—so the organizer may actually (but subconsciously) start favoring the viewpoint they are emphasizing in order to reduce that dissonance rather than for sound reasons. If a significant number of people filling full-time EA jobs were previously paid student organizers, the cumulative effect of this bias could be significant.
I don’t have a great solution for this given that the funding situation is what it is. However, I would err on the side of paying student organizers too little rather than too much. I speculate that the risk of cognitive dissonance (and any pressure student organizers may feel to take certain positions) increases to some extent with the amount of money involved. While I don’t have a well-developed opinion on whether to pay student organizers at all, they should not be paid “an outrageous amount of money” as Dave reports.
It seems like a lot of criticism of EA stems from concern about “groupthink” dynamics. At least, that is my read on the main reason Dave dislikes retreats. This is a major concern of mine as well.
I know groups like CEA and Open Phil have encouraged and funded EA criticism. My difficulty is that I don’t know where to find that criticism. I suppose the EA forum frequently posts criticisms, but fighting groupthink by reading the forum seems counterproductive.
I’ve personally found a lot of benefit in reading Reflective Altruism’s blog.
What I’m saying is, I know EA orgs want to encourage criticism, and good criticisms do exist, but I don’t think orgs have found a great way to disseminate those criticisms yet. I would want criticism dissemination to be more of a focus.
For example, there is an AI Safety reading list an EA group put out. It’s very helpful, but I haven’t seen any substantive criticism linked in that list, while arguments in favor of longtermism make up most of the list.
I’ve only been to a handful of the conferences, but I’ve not seen a “Why to be skeptical of longtermism” talk posted.
Has there been an 80k podcast episode that centers longtermism skepticism before? I know it’s been addressed, but I think I’ve only seen it addressed relatively briefly by people who are longtermists or identify as EA. I’d like to see more guests like the longtermist skeptics at GPI.
I’ve not seen an event centering longtermism/EA criticism put on by my local group. To be fair to the group, I’ve not browsed their events for some time.
The rare occasions I have seen references to longtermism criticism, it’s something like a blog post made by someone who agrees with longtermism but is laying out counter arguments to be rigorous. This is good of them to do, but genuine criticisms from people outside of the community are more valuable and I’d like to see more of them.
Something related to disseminating more criticism, is including more voices from non-EAs. I worry when I see a list of references and it is all EAs. This seems common, even on websites like 80k.
If you’re an animal welfare EA I’d highly recommend joining the wholesome refuge that is the newly minted Impactful Animal Advocacy (IAA).
Website and details here. I volunteered for them at the AVA Summit, which I strongly recommend as the premier conference and community-builder for animal welfare-focused EAs. The AVA Summit has some features I have long thought missing from EAGs—namely people arguing in good faith about deep, deep disagreements (e.g. why don’t we ever see a panel with prominent longtermist and shorttermist EAs arguing for over an hour straight at EAGs?). There was an entire panel addressing quantification bias, which turned into a discussion of some attendees’ view that EA has done more harm than good for the animal advocacy movement… and of how people are afraid to speak out against EA, given that it is a movement that has brought over 100 million dollars into animal advocacy. Personally, I loved there being a space for these kinds of discussions.
Also, one of my favourite things about the IAA community is they don’t ignore AI, they take it seriously and try to think about how to get ahead of AI developments to help animals. It is a community where you’ll bump into people who can talk about x-risk and take it seriously, but for whatever reason are prioritizing animals.
People have been having similar thoughts to yours for many years, including myself. Navigating through EA epistemic currents is treacherous. To be sure, so is navigating epistemic currents in lots of other environments, including the “default” environment for most people. But EA is sometimes presented as being “neutral” in certain ways, so it feels jarring to see that it is clearly not.
Nearly everyone I know who has been around EA long enough to do things like run a university group eventually confronts the fact that their beliefs have been shaped socially by the community in ways that are hard to understand, including by people paid to shape your beliefs. It’s challenging to know what to do in light of that. Some people reject EA. Others, like you, take breaks to figure things out more for themselves. And others press on, while trying to course correct some. Many try to create more emotional distance, regardless of what they do. There’s not really an obvious answer, and I don’t feel I’ve figured it fully out myself. All this is to just say: you’re not alone. If you or anyone else reading this wants to talk, I’m here.
Finally, I really like this related post, as well as this comment on it. When I ran the Yale EA in-depth fellowship, I assigned it as a reading.
Sorry not to weigh in on the object-level parts about university groups and what you think they should do differently, but as I’ve graduated I’m no longer a community builder so I’m somewhat less interested in weighing in on that.
I’m really glad you chose to make this post and I’m grateful for your presence and insights during our NYC Community Builders gatherings over the past ~half year. I worry about organizers with criticisms leaving the community and the perpetuation of an echo chamber, so I’m happy you not only shared your takes but also are open to resuming involvement after taking the time to learn, reflect, and reprioritize.
Adding to the solutions outlined above, some ideas I have:
• Normalize asking people, “What is the strongest counterargument to the claim you just made?” I think this is particularly important in a university setting, but also helpful in EA and the world at large. A uni professor recently told me one of the biggest recent shifts in their undergrad students has been a fear of steelmanning, lest people incorrectly believe it’s the position they hold. That seems really bad. And it seems like establishing this as a new norm could have helped in many of the situations described in the post, e.g. “What are some reasons someone who knows everything you do might not choose to prioritize AI?”
• Greater support for uni students trialing projects through their club, including projects spanning cause areas. You can build skills that cross cause areas while testing your fit and achieving meaningful outcomes in the short term. Campaign for institutional meat reduction in your school cafeteria and you’ll develop valuable skills for AI governance work as a professional.
• Mentorship programs that match uni students with professionals. There are many mentorship programs to model this on, and most have managed to avoid any nefariousness or cult vibes.
• Restructuring fellowships such that they maintain the copy-paste element that has allowed them to spread while focusing more on tools that can be implemented across domains. I like the suggestion of a writing fellowship. I’m personally hoping to create a fellowship focused on social movement theory and advocacy (hit me up if interested in helping!).
I remember speaking with a few people who were employed doing AI-type EA work (people who appear to have fully devoted their careers to the mainstream narrative of EA-style longtermism). I was a bit surprised that when I asked them “What are the strongest arguments against longtermism?” none were able to provide much of an answer. I was perplexed that people who had decided to devote their careers (and lives?) to this particular cause area weren’t able to clearly articulate the main weaknesses/problems.
Part of me interpreted this as “Yeah, that makes sense. I wouldn’t be able to speak about strong arguments against gravity or evolution either, because it seems so clear that this particular framework is correct.” But I also feel some concern if the strongest counterargument is something fairly weak, such as “too many white men” or “what if we should discount future people.”
Mad props for going off anon. Connecting it to your resignation from Columbia makes me take you way more seriously and is a cheap way to make the post 1000x more valuable than an anon version.
Hmm, 1000x feels too strong to me, maybe by >100x.
EDIT: lol controversial comment
This is odd to me because I have a couple of memories of feeling like sr EAs were not taking me seriously because I was being sloppy in my justification for agreeing with them. Though admittedly one such anecdote was pre-pandemic, and I have a few longstanding reasons to expect that the post-pandemic community-builder industrial complex would not have performed as well as the individuals I’m thinking about.
Can confirm that:
“sr EAs [not taking someone seriously if they were] sloppy in their justification for agreeing with them”
sounds right based on my experience being on both sides of the “meeting senior EAs” equation at various times.
(I don’t think I’ve met Quinn, so this isn’t a comment on anyone’s impression of them or their reasoning)
I think that a very simplified ordering for how to impress/gain status within EA is:
Looking back on my early days interacting with EAs, I generally couldn’t present well-justified arguments. I then did feel pressure to agree on shaky epistemic grounds. Because I sometimes disagreed nevertheless, I suspect that some parts of the community were less accessible to me back then.
I’m not sure about what hurdles to overcome if you want EA communities to push towards ‘Agreement sloppily justified’ and ‘Disagreement sloppily justified’ being treated similarly.
I think both things happen in different contexts. (Being socially rewarded just for saying you care about AI Safety, and not being taken seriously because (it seems like) you have not thought it through carefully, that is.)
I dunno, I think it can be the case that being sloppy in your reasoning and having disagreements with your conversational partner are independently penalized, or that there’s an interaction effect between the two.
Especially in quick conversations, I can definitely see times where I’m more attuned to bad (by my lights) arguments for (by my lights) wrong conclusions than bad arguments for what I consider to be right conclusions. This is especially true if “bad arguments for right conclusions” really just means people who don’t actually understand the deep arguments paraphrasing better arguments that they’ve heard.
My experience is that it’s more that group leaders & other students in EA groups might reward poor epistemics in this way.
And that when people are being more casual, it ‘fits in’ to say AI risk & people won’t press for reasons in those contexts as much, but would push if you said something unusual.
Agree my experience with senior EAs in the SF Bay was often the opposite–I was pressed to explain why I’m concerned about AI risk & to respond to various counterarguments.
Thanks for making this post. Many commenters are disputing your claim that “Being paid to run a college club is weird”, and I want to describe why I think it is in fact distorting.
One real reason you don’t want to pay the leadership of a college club a notably large amount of money is because you expose yourself to brutal adverse selection: the more you pay above the market rate for a campus job, the more attractive the executive positions are to people who are purely financially motivated rather than motivated by the mission of the club. This is loosely speaking a problem faced by all efforts to hire everywhere, but is usually resolved in a corporate environment through having precise and dispassionate performance evaluation, and the ability to remove people who aren’t acting “aligned”, if you will. I think the lack of mechanisms like this at college org level basically mean this adverse selection problem blows up, and you simply can’t bestow excess money or status on executives without corrupting the org. I saw how miserable college-org politics were in other settings, with a lot less money to go around than EA.
At the core of philanthropic mission is a principal-agent problem. Donors, at some remove, need to empower agents to spend their money wisely and maximally efficiently. As an EA contributor, rather than primarily a donor, you are constantly being nudged by your corrupted hardware to justify spending money on luxuries and conveniences. From the outside, it’s very difficult to tell whether the money-spenders are adjusting for this bias, hence why things like Wytham Abbey are controversial. So then as an outsider, seeing that the Columbia EA club pays its executives so much and does things like maintain The Commons basically destroys the club’s credibility in my eyes, and I’d only engage with it (as a former Columbia student and now donor) if it were of a substantially different and more frugal-looking character.
I expect I reside farther on the “you should be frugal to signal credibility” axis than a lot of people here, and this comment bleeds into larger criticisms I have of current EA culture, but I thought it was reflected neatly in the dynamic your post describes.
To the best of my knowledge, I don’t think Columbia EA gives out salaries to their “executives.” University group organizers who meet specific requirements (for instance, time invested per week) can independently apply for funding and have to undergo an application and interview process. So, the dynamics you describe in the beginning would be somewhat different because of self-selection effects; there isn’t a bulletin board or a LinkedIn post where these positions are advertised. I say somewhat because I can imagine a situation where a solely money-driven individual gets highly engaged in the club, learns about the Group Organizer Fellowship, applies, and manages to secure funding. However, I don’t expect this to be that likely.
For group funding, at least, there are strict requirements for what money can and cannot be spent on. This is true for most university EA clubs unless they have an independent funding source.
All that said, I agree that “notably large amount[s] of money” for university organizers is not ideal.
The most analogous position I can think of is that university chaplains get paid to work with university students to help teach and mentor them.
Chaplains don’t raise all of the same concerns here. They generally aren’t getting above-market salaries (either for professional-degree holders generally, or compared to other holders of their degree), and there’s a very large barrier to entry (in the US, often a three-year grad degree costing quite a bit of money). So there’s much less incentive and opportunity for someone to get into a chaplain position for the money; chaplains tend to be doing it because they really believe in their work.
For what it’s worth, I run an EA university group outside of the U.S. (at the University of Waterloo in Canada). I haven’t observed any of the points you mentioned in my experience with our EA group:
We don’t run intro to EA fellowships because we’re a smaller group. We’re not trying to convert more students to be ‘EA’. We more so focus on supporting whoever’s interested in working on EA-relevant projects (e.g. a cheap air purifier, a donations advisory site, a cybersecurity algorithm), whether they identify with the EA movement or not.
Since we’re not trying to get people to become EA members, we’re not hosting any discussions where a group organiser could convince people to work on AI safety over all else.
No one’s getting paid here. We have grant money that we’ve used for things like hosting an AI governance hackathon. But that money gets used for things like marketing, catering, prizes, etc. - not salaries.
Which university EA groups specifically did you talk to before proclaiming “University EA Groups Need Fixing”? Based only on what I read in your article, a more accurate title seems to be “Columbia EA Needs Fixing”
I feel it is important to mention that this isn’t supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read quite clearly state that the role of the facilitator is to help guide the conversation, not overly opine or convince participants to believe in x over y.
Thanks for writing this, these are important critiques. I think it can be healthy to disengage from EA in order to sort through some of the weird ideas for yourself, without all the social pressures.
A few comments:
I actually don’t think it’s that weird to pay organizers. I know PETA has a student program that pays organizers, and The Humane League once did this too. I’d imagine you can find similar programs in other movements, though I don’t know for sure.
I suspect the amount that EA pays organizers is unusual though, and I strongly agree with you that paying a lot for university organizing introduces weird and epistemically corrosive incentives. The PETA program pays students $60 per event they run, so at most ~$600 per semester. Idk exactly how much EA group leaders are paid, but I think it’s a lot more than that.
I definitely share your sense that EA’s message of “think critically about how to do the most good” can sometimes feel like code for “figure out that we’re right about longtermism so you can work on AI risk.” The free money, retreats etc. can wind up feeling more like bribery than support, even if the intentions are good. I do expect the post-FTX funding crash to help solve some of these problems though.
FWIW I did not care about animals before engaging with EA, and I work on animal welfare now. I take AI risk pretty seriously but have a lot of uncertainty around it.
For UK universities (I see a few have EA clubs) - it is really weird that student volunteers receive individual funding. I think this applies to US as well but can’t be 100% sure:
UK student clubs fall under the banner of their respective student union, which is a charitable organisation to support the needs, interests and development of the students at the university. They have oversight of clubs, and a pot of money that clubs can access (i.e. they submit a budget for their running costs/events for the year and the union decides what is/isn’t reasonable and what it can/can’t fund). They also have a platform to promote all clubs through the union website, Freshers’ week, university brochures, etc.
Some external organisations sponsor clubs. This is usually to make up ‘gaps’ in funding from the union e.g. If a bank wanted to fund a finance club so they can provide free caviar and wine at all events to encourage students to attend, in return for their logo appearing in club newsletters, this makes sense; the union would not be funding the ‘caviar and wine’ line item in the budget as this is not considered essential to supporting the running of the finance club as per the union’s charitable aims (and they have 100s of clubs to fund).
Student clubs do really impressive things and often need support in achieving this e.g. if a club wanted to run a pan-London summer school to support widening access to STEM it’s likely this would be supported by the union and corporate sponsorship to cover costs. They can also access free support/time from student union staff on the operational/finance side. The students wouldn’t be paid to run the club though, and are often recognised for their extraordinary voluntary service in other ways e.g. student awards, etc.
The PETA program you linked to pays individual students to arrange ad-hoc protests etc. on campuses, and will pay a reasonably small sum and provide materials to support that. It’s not under the banner of a student club (from what I can tell, i.e. not Oxford PETA). It’s different from PETA paying Oxford PETA’s President $600 a semester just for being President. It also isn’t paying someone to set up the club or keep it going year-on-year by ensuring there are committee members (which should be based on enough students having an interest).
It seems OP was paid in his role as the latter i.e. to run a club at a university (a voluntary position). If so, I share OP’s assessment this is weird—there’s something about it which feels uncomfortable: wondering what people are being paid for, how they are selected, if this is a salary (with a contract and protections) or just a golden handshake, how that changes dynamics within the club, or potential conflicts of interest e.g. being asked to run an EA conference in finals week, how safe a student would feel in declining if they had accepted a large amount of money throughout the year.
I’m curious how much money these roles are attracting and whether this changes between universities (even a ballpark figure would help). I’d also be curious to know whether this is generally known at universities and what evidence there is that it helps. Personally, if I were back in Freshers’ Week and I came across an EA stall, it would markedly downgrade my estimation of EA to learn that positions which are ‘volunteer’ roles in other fantastic clubs were paid here, i.e. I’m not seeing passionate students devoting their time to a cause they care about; it’s also/mainly a money-making venture (especially with OP’s description of ‘outrageous’). I’m perplexed why this is being done at all—EA could be funding conferences, retreats, etc. without this type of ‘weird’, and surely there is sufficient interest among students to not have to pay volunteers.
[You’re not the only person to make this point, so please don’t think I’m challenging you personally as I pretty much agree with your stance on everything except for the ‘weirdness’ - I’ve just seen a few misconceptions in this thread about uni clubs and thought I could clear some up here vs a new comment.]
In Australia it is the norm for student union leaders to be paid a decently large sum, in the 20k to 30k range from memory.
The UK has this too. But they are full-time employees, either taking a year off from their studies or in the year after they graduate. Open Phil pays a lot more than this.
Yeah, in Australia they don’t really do much, speaking as someone who’s been friends with them.
Open Phil’s University Organizer Fellowship quotes the following ranges which may be useful as a ballpark:
Funding starting in the following ranges for full-time organizers, pro-rated for part-time organizers:
* In the US: $45,000 – $80,000 per year for undergraduates, and $60,000 – $95,000 per year for non-undergraduates (including those no longer at university).
* In the UK: £31,800 – £47,800 per year for undergraduates, and £35,800 – £55,900 per year for non-undergraduates.
Funding amounts in other countries will be set according to cost-of-living and other location-specific factors. Exact funding amounts will depend on a number of factors, including city-specific cost-of-living, role, track record, and university.
Most grantees are “working 15 hours per week or less.”
For context, a UK graduate at their first job at a top 100 employer earns around £30,000 per year, which is pretty close to the national median salary. So these are well-paying jobs.
It’s always wild to me that English-speaking countries with seemingly competent people (like the UK and Singapore) pay their programmers half or less of what programmers in America make. I still don’t understand the economics behind that.
As in, paying UK undergrads ~£50/hr (assuming they work 15 hours all year round, including in the very lengthy university holidays)? (!) Or am I missing something here?
It is “pro-rated for part-time organizers,” and most are part-time. In the US, proration is commonly done off of around 2000 hrs/year for full time, but I don’t know how Open Phil does it.
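To make the proration concrete, here’s a rough back-of-the-envelope sketch. The ~2,000 hours/year full-time baseline is the common US convention mentioned above, not Open Phil’s confirmed formula, and the salary input is just the midpoint of the quoted UK undergraduate range:

```python
# Rough proration sketch. Assumes a 2,000 hr/yr full-time baseline and the
# midpoint of the quoted UK undergraduate range; not Open Phil's actual formula.
full_time_salary_gbp = (31_800 + 47_800) / 2   # midpoint ≈ £39,800/yr
full_time_hours_per_year = 2_000               # common US full-time convention
hours_per_week = 15
weeks_per_year = 52

implied_hourly_rate = full_time_salary_gbp / full_time_hours_per_year   # ≈ £20/hr
part_time_hours = hours_per_week * weeks_per_year                       # 780 hrs/yr
prorated_annual_pay = implied_hourly_rate * part_time_hours             # ≈ £15,500/yr

print(f"Implied hourly rate: £{implied_hourly_rate:.0f}/hr")
print(f"Prorated pay at 15 hrs/week: £{prorated_annual_pay:,.0f}/yr")
```

Under those assumptions the effective rate is closer to £20/hr than £50/hr; the £50/hr figure comes from dividing the full annual amount by part-time hours without prorating.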
It’s a similar situation with at least some universities in Australia, with the added complication that club office-holders are elected by club members, so no conventional application process is allowed, and there’s always the chance that a random, non-CEA-vetted member will turn up and manage to win the election.
+1 to the amount of money being really high relative to other clubs (and—importantly—other on-campus jobs).
At my college (Haverford College, a small liberal arts school in the US) the only “club” that was paid (to my knowledge) was the environmental committee, and this was because 1) it was a committee which liaised with other offices on campus (e.g. president’s office, arboretum, faculty) and 2) it existed because it was funded by an independent donor.
Only the org leaders were compensated, and this was at the college-wide student rate of $9–10 per hour (depending on your work experience).
I don’t think $10/hour is a reasonable wage to pay anyone, and other unis probably have higher wages ($15, possibly higher?), but it gives you a sense of the discrepancy in pay between on-campus jobs and student organizers.
I think it’s reasonable to pay students higher wages during the summer where some students may have competitive offers from for-profit companies. I’d weight it higher if they are needs based (e.g. some schools like Haverford have a mandatory co-pay of ~$2500 a year, which many students earn during the summer).
Is it actually bad if AI, longtermism, or x-risk are dominant in EA? That seems to crucially depend on whether these cause areas are actually the ones in which the most good can be done—and whether we should believe that depends on how strong arguments back up these cause areas. Assume, for example, that we can do by far the most good by focusing on AI x-risks and that there is an excellent case / compelling arguments for this. Then, this cause area should receive significantly more resources and should be much more talked about, and promoted, than other cause areas. Treating it just like other cause areas would be a big mistake: the (assumed) fact that we can do much more good in this cause area is a great reason to treat it differently!
To be clear: my point is not that AI, longtermism, or anything else should be dominant in EA, but that how these cause areas should be represented in EA (including whether they should be dominant) depends on the object-level discourse about their cost-effectiveness. It is therefore unobvious, and depends on difficult object-level questions, whether a given degree of dominance of AI, longtermism, or any other cause area, is justified or not. (I take this to be in tension with some points of the post, and some of the comments, but not as incompatible with most of its points.)
I am puzzled that, at the time of writing, this comment has received as many disagreement votes as agreement votes. Shouldn’t we all agree that the EA community should allocate significantly more resources to an area, if by far the most good can be done by this allocation and there are sound public arguments for this conclusion? What are the main reasons for disagreement?
Different people in EA define ‘good’ in different ways. You can argue that some cause is better for some family of definitions, but the aim is, I think, to help people with different definitions too achieve the goal.
You say “if by far the most good can be done by this allocation and there are sound public arguments for this conclusion”, but the idea of ‘sound public arguments’ is tricky. We’re not scientists with some very-well-tested models. You’re never going to have arguments which are conclusive enough to shut down other causes, even if it sometimes seems to some people here that they do.
In my view, the comment isn’t particularly responsive to the post. I take the post’s main critique as being something like: groups present themselves as devoted to EA as a question and to helping participants find their own path in EA, but in practice steer participants heavily toward certain approved conclusions.
That critique is not inconsistent with “EA resources should be focused on AI and longtermism,” or maybe even “EA funding for university groups should concentrate on x-risk/AI groups that don’t present themselves to be full-spectrum EA groups.”
Shouldn’t we expect people who believe that a comment isn’t responsive to its parent post to downvote it rather than to disagree-vote it, if they don’t have any substantive disagreements with it?
Sorry to hear that you’ve had this experience.
I think you’ve raised a really important point—in practice, cause prioritisation by individual EAs is heavily irrational, and is shaped by social dynamics, groupthink and deference to people who don’t want people to be deferring to them. Eliminating this irrationality entirely is impossible, but we can still try to minimise it.
I think one problem we have is that it’s true that cause prioritisation by orgs like 80000 Hours is more rational than many other communities aiming to make the world a better place. However, the bar here is extremely low, and I think some EAs (especially new EAs) see cause prioritisation by 80000 Hours as 100% rational. I think a better framing is to see their cause prioritisation as less irrational.
As someone who is not very involved with EA socially because of where I live, I’d also like to add that from the outside, there seems to be a fairly strong, widespread consensus that EAs think AI Safety is the most important cause area. But then I’ve found that when I meet “core EAs”, e.g. people working at CEA, 80k, FHI etc., there is far more divergence in views around AI x-risk than I’d expect, and this consensus does not seem to be present. I’m not sure why this discrepancy exists and I’m not sure how it could be fixed—maybe staff at these orgs could publish their “cause ranking” lists.
Some of my suggestions for all EA organisers and CEA to improve epistemics and cause prioritisation via intro fellowships and Arete fellowships:
Discuss this thought experiment to better emphasise uncertainty in cause prioritisation, to encourage more independent cause prioritisation, and discourage deference. a) “Imagine 100 different timelines where effective altruism emerged. How consistent do you think the movement’s cause priorities (and rankings of them) would be across these 100 different timelines?” and b) “Imagine effective altruism independently emerged in 100 different countries and these movements could not contact each other. How consistent do you think the movement’s cause priorities (and rankings of them) would be across these 100 different countries?”
Discuss specific, unavoidable philosophical problems with cause prioritisation. This includes a) the effects of defining problems more narrowly or more broadly on “pressingness”, b) the fact that cause prioritisation is used to identify impactful interventions, which is not ideal, and probably other problems that I can’t think of off the top of my head.
Make new EAs aware of the Big List of Cause Candidates post, and the concept of Cause X.
When encouraging EAs to get involved with the community, discuss the risk of optimising for social status instead of collective impact.
At the end of Arete Fellowships / an EA Intro Fellowships, show fellows data from the EA Surveys (particularly cause prioritisation survey data) to give them a more evidence-based sense of what the community actually thinks about things.
This post is now three years old but is roughly what you suggest. For convenience I will copy one of the more relevant graphs into this comment:
What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil’s funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.
Hey—thanks for the suggestions!
I work on the Virtual Programs team at CEA, and we’re actually thinking of making some updates to the handbook in the coming months. I’ve noted down your recommendations and we’ll definitely consider adding some of the resources you shared. In particular, I’d be excited to add the empirical data point about cause prio, and maybe something discussing deference and groupthink dynamics.
I do want to mention that some of these resources, or similar ones, already exist within the EA Handbook intro curriculum. To note a few:
- Moral Progress & Cause X, Week 3
- Crucial Conversations, Week 4 (I think this gets at some similar ideas, although not exactly the same content as anything you listed)
- Big List of Cause Candidates, Week 7
Also I want to mention that while we are taking another look at the curriculum—and we will apply this lens when we do—my guess is that a lot of the issue here (as you point out!) actually happens through interpersonal dynamics, and is not informed by the curriculum itself, and hence requires different solutions.
One data point to add in support: I once spoke to a relatively new EA who was part of a uni group who said they “should” believe that longtermism/AI safety is the top cause, but when I asked them what their actual prio was, they said it was mental health.
By “their actual prio”, which of these do you think they meant (if any)?
1. The area where they could personally do the most good with their work
2. The area that should absorb the highest fraction of EA-interested people, because it has the strongest opportunities to do good
3. The area they personally cared most about, to the point where it would feel wrong to answer otherwise (even if they technically thought they could do more good in other areas)
I’ve sometimes had three different areas in mind for these three categories, and have struggled to talk about my own priorities as a result.
A combination of one and three, but it’s hard to say exactly where the boundaries lie. E.g. I think they thought it was the best cause area for themselves (and maybe people in their country) but not everyone globally, or something.
I think they may not have really thought about two in-depth, because of the feeling that they “should” care about one and prioritize it, and appeared somewhat guilty or hesitant to share their actual views because they thought they would be judged. They mentioned having spoken to a bunch of others and feeling like that was what everyone else was saying.
It’s possible they did think two though (it was a few years ago, so I’m not sure).
First, I am sorry to hear about your experience. I am sympathetic to the idea that a high level of deference and lack of rigorous thinking is likely rampant amongst the university EA crowd, and I hope this is remedied. That said, I strongly disagree with your takeaways about funding and have some other reflections as well:
“Being paid to run a college club is weird. All other college students volunteer to run their clubs.”
This seems incorrect. I used to feel this way, but I changed my mind because I noticed that every “serious” club (i.e., any club wanting to achieve its goals reliably) on my campus pays students or hires paid interns. For instance, my university has a well-established environmental science ecosystem, and at least two of the associated clubs are supported via some university funding mechanism (this is now so advanced that they also do grantmaking for student projects ranging from a couple thousand to a max of $100,000). I can also think of a few larger Christian groups on campus which do the same. Some computer science/data-related clubs also do this, but I might be wrong.
Most college clubs are indeed run on a volunteer basis. But most are run quite casually. There is nothing wrong with this; most of them are hobby-based clubs where students simply want to create a socially welcoming atmosphere for anyone who might be interested. They don’t have weekly discussions, TA-like facilitation, socials/retreats, or, in some cases, research/internship programs. In this way, EA clubs are different because they aren’t trying to be the “let’s get together and have fun” club. I almost see university EA clubs as a prototype non-profit or a so-so-funded university department trying to run a few courses.
In passing I should also mention that it is far more common for clubs to get funding for hosting events, outreach, buying materials, etc. My guess is that in these cases if more funding were available, then students running those clubs would also get stipends.
“Getting paid to organize did not make me take my role more seriously, and I suspect that other organizers did not take their roles much more seriously because of being paid.”
My experience has been the opposite of yours. Before getting paid, organizing felt like a distraction from more important things; there was always this rush to wrap up tasks; I enjoyed organizing but always felt somewhat guilty for spending time on it. These feelings vanished after getting funded. I (at least) doubled the amount of time I spent on the club, got more exposed to EA, got more organized with the meetings/deadlines, and I feel that I have a sense of responsibility for running this project the best I can.
Turn the University Group Organizer Fellowship into a need-based fellowship.
I am uncertain about this. I think a better and simpler heuristic is that if people are working diligently for x hours a week, then they should be funded for their labor.
“If the University Group Organizer Fellowship exit survey indicates that funding was somewhat helpful in increasing people’s commitment to quality community building, then reduce funding...”
I agree with this. The funding being given out could be somewhat reduced and I feel it would be just as impactful as it is now, but I am keen to see the results of the survey.
“I am very concerned with just how little cause prioritization seems to be happening at my university group.”
At least for university groups, maybe this is the wrong thing to be concerned about. It would be better if students could do rigorous cause-prioritization, but I think for most, this would be quite challenging, if not virtually impossible.
The way I see it, most university students are still in the formative stages of figuring out what they believe in and their reasons for doing so. Almost all are in the active process of developing their identity and goals. Some have certain behavioral traits that prevent them from exploring all options (think of the shy person who later went on to become a communicator of some sort). All this is sometimes exacerbated by mental health problems or practical concerns (familial duties, the need to be financially stable, etc.).
Expecting folks from this age group to perform cause prioritization is a high bar. I am sure some can do it, but I wouldn’t have been able to. Instead, I think it’d be better if university EA groups helped their members understand how to make the best possible bet at the moment to have a pathway to impact. For instance, I hope that most students who go through the fellowship:
— Develop better ways of thinking and forming opinions
— Be more open-minded / have a broad sphere of concern
— Take ideas seriously and act on them (usually by building career capital)
— Play the long game of having a high-impact career
Now, this likely doesn’t happen to the best possible degree. But I think that all this and more, in combination, would help most in refining their cause prioritization over the years and setting themselves up to have a rewarding and impactful career.
Maybe this is what you meant when you were expressing your concerns, in which case, sorry for the TED talk and I wholeheartedly agree.
I don’t think most people should be doing cause prioritisation with 80000 Hours’s level of rigour, but I think everyone is capable of doing some sort of cause prioritisation—at least working out where their values may differ from those of 80000 Hours, or identifying where they disagree with some of 80K’s claims and working out how that would affect how they rank causes.
I agree. I was imagining too rigorous (and narrow) of a cause prioritization exercise when commenting.
I agree with all of this up until the cause prioritisation part. I’m confused about why you think it would be a mental health concern?
There’s a very big space of options between feeling like there’s only one socially valid opinion about a cause area and feeling like you have to do a rigorous piece of analysis of all causes in 6 weeks. I gather the OP wants something that’s more just an extension of ‘developing better ways of thinking and forming opinions’ about causes, and not quashing people’s organic critical reflections about the ideas they encounter.
Surely we want more analytical people who can think clearly and are net contributors to important intellectual debates in EA, rather than people who just jump on bandwagons and regurgitate consensus arguments.
I don’t! I meant to say that students who have mental health concerns may find it harder to do cause prioritization while balancing everything else.
I was unsure if this is what OP meant; if yes, then I fully agree.
Your description of retreats matches my experience almost disconcertingly; it even described things I didn’t realize I took away from the retreat I went to. I felt like the only one who had those experiences. Thanks for writing this up. I hope things work out for you!
I’ve heard this critique in different places and never really understood it. Presumably undergraduates who have only recently heard of the empirical and philosophical work related to cause prioritization are not in the best position to do original work on it. Instead they should review arguments others have made and judge them, as you do in the Arete Fellowship. It’s not surprising to me that most people converge on the most popular position within the broader movement.
IMO there’s a difference between evaluating arguments to the best of your ability and just deferring to the consensus around you. I think most people probably shouldn’t spend lots of time doing cause prio from scratch, but I do think most people should judge the existing cause prio literature at the object level, to the best of their ability.
My read of the sentence indicated that there was too much deferring and not enough thinking through the arguments oneself.
Of course. I just think evaluating and deferring can look quite similar (and a mix of the two is usually taking place).
OP seems to believe students are deferring because of other frustrations. As many have quoted: “If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong”.
I’ve attended Arete seminars at Ivy League universities and seen what looked like fairly sophisticated evaluation to me.
I’d say that critically examining arguments in cause prioritization is an important part of doing cause prioritization. Just as examining philosophical arguments of others is part of doing philosophy. At least, reviewing and judging arguments does not amount to deferring—which is what the post seems mainly concerned about. Perhaps there is actually no disagreement?
Thank you for the post, as a new uni group organizer I’ll take this into account.
I think a major problem may lie in the intro-fellowship curriculum offered by CEA. It says it is an “intro” fellowship, but the program spends three weeks disproportionately discussing the longtermism/x-risk framework. For a person newly encountering EA ideas, this could create two problems:
First, as Dave mentioned, some people may want to do good as much as possible but don’t buy longtermism. We might lose these people who could do amazing good.
Second, EA is weird and unintuitive. Even without the AI stuff, it is still weird because of things like impartial altruism, prioritization, and earning to give. And if we give this weirdness plus the “most important century” narrative to wanna-be EAs, we might lose people who could have become EAs if they had encountered the ideas with time for digestion.
This was definitely the case for me. I had a vegan advocacy background when I enrolled in my first fellowship. It was only 6 weeks and only one week was given to longtermism. Now I do believe we are in the most important century, after a lot of time thinking and reading, but if I had been given this weird framework from the start, I might have been scared off and taken a step back from EA because of the overwhelming weirdness and cultish vibes.
Maybe we could slow down the creation of “AI safety people” by cutting the fellowship to 6 weeks and offering an additional 2-week track program for people who are interested in longtermism, or by just giving them resources, having 1:1s, or directing them to in-depth programs.
I disagree-voted and briefly wanted to explain why.
“some people may want to do good as much as possible but don’t buy longtermism. We might lose these people who could do amazing good.”
I agree that University groups should feel welcoming to those interested in non-longtermist causes, but it is perfectly possible to create this atmosphere without nixing key parts of the syllabus. I don’t think the syllabus has much to do with creating this atmosphere. Rockwell and freedomandutility (and others) have listed some great points on this, and I think the conversations you have (and how you have them) and the opportunities you share with your group could help folks be more cause-neutral.
One idea I liked was the “local expert” model where you have members deeply exploring various cause areas. When there is a new member interested in cause X, you can simply redirect them to the member who has studied it or done internships related to that cause. If you have different “experts” spanning different areas, this could help maintain a broad range of interests in the club and feel welcoming to a broader range of newcomers.
“And if we give this content of weirdness plus the “most important century” narrative to the wanna-be EAs we might lose people who could be EA if they had encountered the ideas with a time for digestion.”
I think this assumes that people won’t be put off by the weirdness by, let’s say, week 1 or week 3. I could see situations where people would find caring about animals weirder than caring about future humans, or both of these weirder than pandemic prevention or global poverty reduction. I don’t know what the solution is, except reminding people to be open-minded + critical as they go through the readings, and cultivating an environment where people understand that they don’t have to agree with everything to be a part of the club.
A host of other reasons that I will quickly mention:
I don’t think those three weeks of the syllabus you mention disproportionately represent a single framework: One can care about x-risk without caring about longtermism or vice-versa or both. There are other non-AI x-risks and longtermist causes that folks might be interested in, so I don’t think it is there just to generate more interest in AI Safety.
Internally, we (group organizers at my university) did feel the AI week was a bit much, so we made the career-related readings on AI optional. The logic was that people should learn about, for instance, why AI alignment could be hard with modern deep learning, but they don’t need to read the 80K career profile on Safety if they don’t want to. We added readings on s-risks, and are considering adding pieces on AI welfare (undecided right now).
It is more honest to have those readings in the introductory syllabus: New members could be weirded out to see x-risk/longtermist/AI jobs on 80K or the EA Opportunity board and question why those topics weren’t introduced in the Introductory Program.
I was also primarily interested in animal advocacy prior to EA, and now I am interested in a broader range of issues while maintaining (and refining) my interest in animal advocacy. I am now also uninterested in some causes I initially thought were just as important. I think having an introductory syllabus with a broad range of ideas is important for such cross-pollination/updating and a more robust career planning process down the line.
Anecdote: One of the comments that comes up in our group sometimes is that we focus too much on charities as a way of doing good (the first few weeks on cost-effectiveness, global health, donations, etc.). So, having a week on x-risk and sharing the message that “hey, you can also work for the government, help shape policy on bio-risks, and have a huge impact” is an important one not to leave out.
First, I’m sorry you’ve had this bad experience. I’m wary of creating environments that put a lot of pressure on young people to come to particular conclusions, and I’m bothered when AI Safety recruitment takes place in more isolated environments that minimize inferential distance because it means new people are not figuring it out for themselves.
I relate a lot to the feeling that AI Safety invaded as a cause without having to prove itself in a lot of the ways the other causes had to rigorously prove impact. No doubt it’s the highest prestige cause and attractive to think about (math, computer science, speculating about ginormous longterm impact) in many ways that global health or animal welfare stuff is often not. (You can even basically work on AI capabilities at a big fancy company while getting credit from EAs for doing the most important altruism in the world! There’s nothing like that for the other causes.)
Although I have my own ideas about some bad epistemics going on with prioritizing AI Safety, I want to hear your thoughts about it spelled out more. Is it mainly the deference you’re talking about?
Thanks so much for sharing your thoughts and reasons for disillusionment. I found this section the most concerning. If this has even a moderate amount of truth to it (especially the bit about discouraging new potential near termist EAs) then these kind of fellowships might need serious rethinking.
“Once the fellowship is over, the people who stick around are those who were sold on the ideas espoused in weeks 4, 5, and 6 (existential risks, longtermism, and AI) either because their facilitators were passionate about those topics, they were tech bros, or they were inclined to those ideas due to social pressure or emotional appeal. The folks who were intrigued by weeks 1, 2, and 3 (animal welfare, global health, and cost-effectiveness) but dismissed longtermism, x-risks, or AI safety may (mistakenly) think there is no place for them in EA. Over time, the EA group continues to select for people with those values, and before you know it your EA group is now a factory that churns out x-risk reducers, longtermists, and AI safety prioritizers.”
Thanks so much for writing this post Dave; I find this really helpful for pinning down some of the perceived and real issues with the EA community.
I think some people have two stable equilibria: one being ~“do normal things” and the other being “take ideas seriously” (obviously an oversimplification). I think getting from the former to the latter often requires some pressure, but the latter can be inhabited without sacrificing good epistemics and can be much more impactful. Plus, people who make this transition often end up grateful that they made it, and wish they’d made it earlier.
I think other people basically don’t have these two stable equilibria, but some of those have an unstable equilibrium for taking ideas seriously which is epistemically unsound, and it becomes stable through social dynamics rather than by thinking through the ideas carefully, which is bad… but also potentially good for the world if they can do good work despite the unsound epistemic foundation…
This is messy and I don’t straightforwardly endorse it, but I also can’t honestly say that it’s obvious to me we should always prioritize pure epistemic health if it trades off against impact here. Reducing “the kind of outreach and social pressure that harms epistemic health” might also reduce the number of both kinds of people who take ideas seriously. Maybe there is no tradeoff; maybe this is ultimately bad from an impact perspective too, or maybe there’s a way to get the good without the bad. But that’s not clear to me, and I would love to hear anyone’s suggestions.
(The stable vs. unstable equilibrium concept isn’t described exactly right, but I think the point is clear.)
I’m not really an EA, but EA-adjacent. I am quite concerned about AI safety, and think it’s probably the most important problem we’re dealing with right now.
It sounds like your post is trying to point out some general issues in EA university groups, and you do point out specific dynamics that one can reasonably be concerned about. It does seem, however, like you do have an issue with the predominance of concerns around AI that is separate from this issue and that strongly shines through in the post. I find this dilutes your message and it might be better separated from the rest of your post.
To counter this, I’m also worried about AI safety despite having mostly withdrawn from EA, but I think the EA focus and discussion on AI safety is weird and bad, and people in EA get sold on specific ideas way too easily. Some examples for ideas that are common but I believe to be very shoddy: “most important century”, “automatic doom from AGI”, “AGI is likely to be developed in the next decade”, “AGI would create superintelligence”.
What are your reasons for being worried?
More simplistic ones. Machines are getting smarter and more complex and have the potential to surpass humans in intelligence, in the sense of being able to do the things we can do or harder things we haven’t cracked yet, all the while having a vast advantage in computing power and speed. Stories we invent about how machines can get out of control are often weird and require them to ‘think outside the box’ and reason about themselves—but since we ourselves can do it, there’s no reason a machine couldn’t. All of this, together with the perils of maximization.
The thing is, every part of this might or might not happen. Machine intelligence may remain too narrow to do any of this. Or may not decide to break out of its cage. Or we may find ways to contain it by the time any of this happens. Given the current state of AI, I strongly think none of this will happen soon.
Mostly agree, though maybe not with the last sentence on certain readings (i.e. I’m “only” 95% confident we won’t have human-like agents by 2032, not 99.9% confident). But I strongly agree that the basic “hey, intelligent agents could be dangerous; humans are” is much more convincing than detailed AI doomer stuff.
You’re not really countering me! It’s very easy to imagine that group dynamics like this get out of hand, and people tend to repeat certain talking points without due consideration. But if your problem is bad discourse around an issue, it would be better to present that separately from your personal opinions on the issue itself.
I don’t think the two issues are separate. The bad dynamics and discourse in EA are heavily intertwined with the ubiquity of weakly supported but widely held ideas, many of which fuel the AI safety focus of the community. The subgroups of the community where these dynamics are worst are exactly those where AI safety as a cause area is the most popular.
Hello,
I am sorry that this was your experience in your university group. I would also like to thank you for being bold and sharing your concerns, because it will help bring necessary changes to various groups where people are having the same experience. This kind of effort is important because it will keep the priorities, actions, and overall efforts of EA groups in check.
There are some actions that my university facilitator took to help people “think better” about issues they are particularly interested in and that fall under the EA umbrella (or would make the world a better place).
He held 1-on-1s with each member to try and assess interest in various cause areas. He also assessed whether the EA community has been of help to each individual in helping us think better, and encouraged us to air any concerns we might have, such as the issues you have written about in your post.
He held discussions with the group members individually to try and establish whether the course they are undertaking at university is something they are interested in, or whether they just landed on it, which would mean they’re interested in another career path or in other interests that would help others in line with EA goals.
He also guided us on how best to prioritize causes and how to best use EA and the resources available to make the most of our efforts to make the world a better place.
He encouraged us to be vocal about cause areas we think are neglected and deserve more effort. We also have general conversations within this space. Having conversations with like-minded individuals in line with EA goals helped us foster growth and think better.
After each session where we discuss the week’s content, we take time to discuss our personal (EA-related) ideas and learn in a safe space where we can ask any questions we have. This honestly helped me see various issues from different perspectives and, in turn, helped me start to think of the cause areas that are the best fit for me.
In my own view, we should commend the organizers and other stakeholders involved in AI safety. The movement has become a success, and they are on the right path to prevent AI-related risks in the future despite the failures you have highlighted based on your experience. We should push other EA-related movements to get as much traction as AI safety. However, we should not just push people towards any EA-related organization without cause prioritization.
I agree with various comments on how AI safety (and other high-impact cause areas) should have their own individual movements to give EA space to be its own thing. This would give individuals an opportunity to join such high-impact organizations of their choice based on their interests and fit. In turn, EA would focus only on teaching people how to “find the best ways to help others, and put them into practice”, both as a research field and a practical community.
Regarding payment to organizers, I think that this helps them spend time doing what is most impactful. Paying organizers helps by motivating them to continue volunteering and fostering the growth of the group. Funding the organizer can make the difference between them spending more time organizing the club versus looking for a part-time job, which would take more of their time. I believe that if an organizer is financially stable, they should decline the payment and see it directed towards another cause area. This will definitely depend on how altruistic a person is.
As someone that organizes and is in touch with various EA/AI safety groups, I can definitely see where you’re coming from! I think many of the concerns here boil down to group culture and social dynamics that could arise irrespective of which cause areas people in the group end up focusing on.
You could imagine two communities whose members in practice work on very similar things, but whose culture couldn’t be further apart:
Intellectually isolated community where longtermism/AI safety being of utmost importance is seen as self-evident. There are social dynamics that discourage certain beliefs and questions, including about said social dynamics. Comes across as groupthinky/culty to anyone that isn’t immediately on-board.
Epistemically humble community that tries to figure out what the most impactful projects are to improve the world, a large fraction of which have tentatively come to the conclusion that AI safety appears very pressing and have subsequently decided to work on this cause area. People are self-aware of the tower of assumptions underlying this conclusion. Social dynamics of the group can be openly discussed. Comes across as truth-seeking.
I think it’s possible for some groups to embody the culture of the latter example more, and to do so without necessarily focusing any less on longtermism and AI safety.
Sorry to hear that you had such a rough experience.
I agree that there can be downsides of being too trapped within an EA bubble and it seems worthwhile suggesting to people that after spending some extended time in the bay, they may benefit from getting away from it for a bit.
Regarding retreats, I think it can be beneficial for facilitators to try to act similarly to philosophy lecturers who are there to ensure you understand the arguments for and against more than trying to get you to agree with them.
I also think that it would be possible to create an alternate curriculum that encouraged more debate, perhaps by pairing up articles with others arguing the opposite position. However, producing such a resource would be a lot of work.
Kudos to you for having the courage to write this post. One of the things I like most about it is the uncanny understanding and acknowledgement of how people feel when they are trying to enter a new social group. EAs tend to focus on logic and rationality but humans are still emotional beings. I think perhaps we may underrate how these feelings drive our behavior. I didn’t know that university organizers were paid—that, to me, seems kind of insane and counter to the spirit of altruism. I really like the idea of making it need based. One other thing your post made me reflect on is how community-building strategies and epistemics may differ at less-selective versus highly-selective universities. Your experience is at highly-selective schools but I’m curious how many EA groups there are at regional comprehensive universities and open access schools, or HBCUs, community colleges, tribal colleges, etc. Those student populations are very different but may bring valuable perspectives to EA if effort is made to engage them.
Thank you for taking the time to write this. In 2020, I had the opportunity to start a city group or a university group in Cyprus, given the resources and connections at my disposal. After thinking long and hard about the homogenization of the group towards a certain cause area, I opted not to, and instead focused on being a facilitator for the virtual program, where I believe I will have more impact by introducing EA to newcomers from a more nuanced perspective. Facilitators of the virtual program can maintain a good balance between cause areas, and no single cause area can dominate because there is no external pressure. I find it a much better use of my time and efforts.
With no social hierarchy or community to impress in the virtual program, it is often clear and easy for people to defend their epistemic beliefs and cause prioritization without any external factors or social pressure. I also try my best to mention to cohorts the need for personal fit in choosing cause areas, not just groupthink or potential social currency. Perhaps this is where university fellowships and the virtual program diverge (no external pressure). Cohorts are often nudged to think deeply about positions and ideas that they hold without feeling any pressure. Some end up working on AI safety; others end up working on different areas such as climate change and biorisk; and some end up joining CE to develop new charity ideas. I find this highly fulfilling.
I don’t necessarily think the syllabus from CEA nudges people towards AI safety; oftentimes it serves as a good resource for cohorts who joined the program through animal welfare, global health, or poverty to learn about other EA cause areas and compare their epistemics and personal fit for tackling pressing issues. I do often get asked tough questions about why the community seems too focused on AI safety, especially from cohorts whom I have nudged to attend EAGx and EAGs. I do point out the massive differences in funding for different areas within EA, because it is far too easy for some cohorts to develop the idea that EA is one huge AI-based movement.
Given the different dynamics between the virtual program and university groups, I do sympathize with you.
Thank you for writing this. It puts some of the questions I get asked during the fellowship into a better perspective, especially coming from a university organizer.
@Lizka Apologies if this was raised and answered elsewhere, but I just noticed in relation to this article that your reading estimate says 12 minutes, but when I press “listen to this article” it says 19 minutes at normal speed. Is there a reason for the discrepancy? How is the reading time calculated?
Also, when I tried to look for who else from the Forum team to tag—I don’t find any obvious page/link that lists the current team members. How can I find this in the future?
Most people can read faster than they can talk, right? So 60% longer for the audio version than the predicted reading time seems reasonable to me?
“The moderation team
The current moderators (as of July 2023) are Lorenzo Buonanno, Victoria Brook, Will Aldred, Francis Burke, JP Addison, and Lizka Vaintrob (we will likely grow the team in the near future). Julia Wise, Ollie Base, Edo Arad, Ben West, and Aaron Gertler are on the moderation team as active advisors. The moderation team uses the email address forum-moderation@effectivealtruism.org. Please feel free to contact us with questions or feedback.”
https://forum.effectivealtruism.org/posts/yND9aGJgobm5dEXqF/guide-to-norms-on-the-forum#The_moderation_team
And there is also the online team:
https://www.centreforeffectivealtruism.org/team/
For questions like this I would use the Intercom; here is how the team wants to be contacted:
https://forum.effectivealtruism.org/contact
I don’t know the formula, but I think the reading time looks at the number of words and estimates how long someone would need to read this much text.
“The general adult population read 150 – 250 words per minute, while adults with college education read 200 – 300 words per minute. However, on average, adults read around 250 words per minute.”
https://www.linkedin.com/pulse/how-fast-considered-speed-reading-quick-facts-paul-nowak
This text has 3037 words. 3037 / 250 = 12.15 min.
“In the English language, people speak about 140 words per minute. A fast speaker will get to 170 words per minute, a slow speaker will use around 110 words.”
https://debatrix.com/en/speech-calculator/
3037 / 140 = 21.70 min
The AI finished reading this post at 18:50 with the outro left. So we have 18.83 min.
3037 / 18.83 = 161.29 words per minute
The AI voice speaks slightly faster than the average human.
Does this answer your question?
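For what it’s worth, here’s a minimal sketch of the kind of estimate described above, assuming the Forum uses a simple words-per-minute heuristic (the exact formula isn’t public, so the constants below are just the figures quoted in this thread):

```python
# Minimal sketch of the reading/listening time estimates discussed above.
# The words-per-minute constants are the figures quoted in this thread,
# not the Forum's actual formula.

def reading_time_minutes(word_count: int, words_per_minute: float = 250) -> float:
    """Estimated silent-reading time for an average adult reader."""
    return word_count / words_per_minute

def listening_time_minutes(word_count: int, words_per_minute: float = 161) -> float:
    """Estimated listening time at the measured AI narration speed."""
    return word_count / words_per_minute

words = 3037
print(f"Reading:   {reading_time_minutes(words):.1f} min")    # ~12.1 min
print(f"Listening: {listening_time_minutes(words):.1f} min")  # ~18.9 min
```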