Bad Omens in Current Community Building
Community building has recently had a surge in energy and resources. It scales well, it’s high leverage, and you can get involved in it even if six months ago you’d never heard of Effective Altruism. Having seen how hard some people are rowing this boat, I’d like to see if I can’t steer it a bit.
The tl;dr is:
Some current approaches to community building, especially in student groups, are driving away great people
These approaches involve optimising for engaging a lot of new people in a way that undermines good epistemics and trades off against other goals which are harder to measure but equally important (cf. Goodhart’s Law)
I argue that this is a top priority because the people that these approaches drive away are in many cases the people EA needs the most.
I’ll start by describing why I and several friends of mine did not become EAs.
Then I’ll lay out my sense of what EA needs and what university community building is trying to achieve. I’ll also discuss things that I have encountered in community building that I’ve found troubling.
After that I’ll give my model for how these approaches to community building might be causing serious problems.
Finally I’ll explain why I think that this issue in particular needs to be prioritised, and what I think could be done about it.
Part 1 - Reasons I and others did not become EAs
1
I was talking to a friend a little while ago who went to an EA intro talk and is now doing one of 80,000 Hours’ recommended career paths, with a top score for direct impact. She’s also one of the most charismatic people I know, and she cares deeply about doing good, with a healthy practical streak.
She’s not an EA, and she’s not going to be. She told me that she likes the concept and the framing, and that since the intro talk she’s often found that when faced with big ethical questions it’s useful to ask “what would an EA do”. But she’s not an EA. Off the back of the intro talk and the general reputation as she perceived it, she got the sense that EA was a bit totalising, like she couldn’t really half-join, so she didn’t. Still, she enjoys discussing the concept of it with me, and she’s curious to hear more about AI.
Certainly there are some professions, like AI safety, where one person going all in is strikingly better than a lot of people who are only partly engaged, but in her area I don’t think this applies. I’ll build on this later.
2
A friend of mine at a different university attended the EA intro fellowship and found it lacking. He tells me that in the first session, foundational arguments were laid out, and he was encouraged to offer criticism. So he did. According to him, the organisers were grateful for the criticism, but didn’t really give him any satisfying replies. They then proceeded to build on the claims about which he remained unconvinced, without ever returning to them or making an effort to find answers themselves.
He recently described something to me as ‘too EA’. When I pushed him to elaborate, what he meant was something like ‘has the appearance of inviting you to make your own choice but is not-so-subtly trying to push you in a specific direction’.
3
Another friend of mine is currently working on Bayesian statistical inference, but has an offer to work as a quantitative trader. He hopes to donate some of his income to charity. He does not want to donate to EA causes, or follow EA recommendations, and in fact he will pretty freely describe EA as a cult. He has not, as far as I know, attended any EA events. He has already made his mind up.
As far as I can tell, this is the folk wisdom among mathematicians in my university: I’ve heard the rough sentiment expressed several times, usually in response to people saying things like “so what do you guys make of EA?”
4
I have a friend who has just started a career in an EA cause area. She knows about EA because I have told her about it, and because I once gave her a copy of The Precipice. But there’s never really been a way for her to get engaged. Her area of interest is distinctly neartermist, and even though she lives in one of the most densely EA cities in the world, she’s never become aware of any EA events in her area.
Me
When I came to university I had already read a lot of the Sequences and I’d known about effective altruism for years, and even read some of the advice on 80,000 Hours. But upon investigating my local group I was immediately massively put off. The group advertised that I could easily book a time to go on a walk with a committee member who would talk to me about effective altruism and give me career advice, and to me this felt off. Every student society was trying to project warm, inviting friendliness, but EA specifically seemed to be trying too hard, and it pattern-matched to things like student religious groups.
I asked around, and quickly stumbled upon some people who confidently told me that EA was an organisation that wanted to trick me into signing away my future income to them in exchange for being part of their gang. The fact that anyone would confidently claim this was enough to completely dissuade me from ever engaging.
Nonetheless, I retained a general interest in the area, and indeed my interest in rationality led me to get to know various older engaged EAs. They never tried to convince me to adopt their values, but they were pretty exemplary in their epistemics, and this made them very interesting to talk to. The groups I floated in were a mix of EAs and non-EAs, but eventually it rubbed off on me. And I’m pretty sure that if I hadn’t had that off-putting first encounter with EA at university, it would have rubbed off a lot sooner.
Part 2 - My concerns with current community building approaches
I have a tentative model for how EA community building could be improved, which I’ve arrived at from a synthesis of two things. The first is my received sense of where EA is currently facing difficulty; the second is things that I have personally found concerning. I’ll lay these out, then present my best guess for what is going wrong in the next section.
Where is EA facing difficulty?
As far as I can tell, the most basic account is that EA is talent-constrained: there aren’t enough good people ready to go out there and do things. This yields the most basic account of what EA should be doing: producing more Highly Engaged EAs (HEAs).
But the picture is slightly more complex than that, because in fact there are only some kinds of talent on which EA is constrained. Indeed, openings for EA jobs tend to be massively oversubscribed. So what specific kinds of talented people does EA need more of? Well, the most obvious place to look is the most recent Leaders Forum, which gives the following talent gaps (in order):
Government and policy experts
Management
The ability to really figure out what matters most and set the right priorities
Skills related to entrepreneurship / founding new organizations
One-on-one social skills and emotional intelligence
Machine learning / AI technical expertise
As you can see, there are in fact five categories which rank above AI technical expertise. So the question is, if many EA jobs are flooded with applicants, why are we still having trouble with these? What I will go on to claim is that current community building may be selecting against people with some of these talents.
What I have found disconcerting
The most concrete thing is community builders acting in ways which I would consider too overtly geared towards conversion. For instance, introducing people to EA by reading prepared scripts, and keeping track of students in CRMs (customer relationship management databases). I find this very aversive, and I would guess that a lot of likely candidates for EA entrepreneurs, governmental officials, and creative types would feel similarly.
This point bears repeating because as far as I can tell a lot of community builders just don’t think this is weird. They do not have any intuitive sense that somebody might be less likely to listen to the message of a speech if they know that it’s being read from a script designed to maximise conversion. They are surprised that somebody interested in EA might be unhappy to discover that the committee members have been recording the details of their conversation in a CRM without asking.
But I can personally confirm that I and several other people find this really aversive. One of my friends said he would “run far” if, in almost any context, someone tried to persuade him to join a group by giving a verbatim speech from a script written by someone else. Even if the group seemed to have totally innocuous beliefs, he thought it would smack of deception and manipulation.
Another red flag is the general attitude of persuading rather than explaining. Instead of focusing on creating a space for truth-seeking (learning useful tools and asking important questions), it seems like many community builders see their main job as persuading people of certain important truths and coaxing them into certain careers. One admitted to me that, if there were a series of moves they could play to convert a new undergrad into an AI safety researcher, or into some other role that seems important, they would ‘kind of want to’ play those moves. This is a very different approach from giving exceptional people the ‘EA toolkit’ and helping them along their journey to figuring out how to have the biggest impact they can.
EA may not in fact be a form of Pascal’s Mugging or fanaticism, but if you take certain presentations of longtermism and X-risk seriously, the demands are sufficiently large that it certainly pattern-matches pretty well to these.
And more generally, I find it odd to know that people who have only known about effective altruism for single-digit months are being put in charge of student groups. Even if they’re not being directly hired by CEA/OpenPhil, they’re still often being given significant resources and tasked with growing their groups. This is an obvious environment for misalignment to creep in, not through any malice but just through a desire to act quickly without a real grip on what to do.
Part 3 - My model of what is going wrong
My central and most important worry is that activities which come close to optimising for the number of new HEAs will disproportionately filter out many of the people it’s most valuable to engage. I’ll reiterate the list of things we need more than technical AI expertise:
Government and policy experts
Management
The ability to really figure out what matters most and set the right priorities
Skills related to entrepreneurship / founding new organizations
One-on-one social skills and emotional intelligence
My impression is that there are some people who will, when presented with the arguments for Effective Altruism, pretty quickly accept them and adopt something approximating the EA mindset and worldview. I think that the people who excel in some of the areas I’ve listed above are significantly less likely to also be the kinds of people who get engaged quickly. I’ll lay my thoughts out in detail, but first let me give an easy example: “The ability to really figure out what matters most and set the right priorities”
People who care a lot about what matters most are likely to be the kinds of people who don’t just go along with arguments. They’ll be the kind who push back, pick holes, and resist attempts to persuade them. I think it would be tempting to assume that the best of these people will already have intuited the importance of scope sensitivity and existential risk, and that they’ll therefore know to give EA a chance, but that’s not how it works. The community needs to contain people who won’t take the importance of existential risk seriously until they’ve had some time to think hard about it, and it will take more effort to get such people engaged.
If you don’t intentionally encourage the kinds of people who instinctively pick holes in arguments while you’re presenting EA to them for the first time, your student group is not going to produce people who are fantastic at coming up with thoughtful and interesting criticisms. I can point to specific people who I believe have useful criticisms of EA, but who have no interest in getting hired to write them up, even if it could be funded: they just don’t care that much about EA, because when they tried to present criticism early on they were ignored.
I’m going to address the following points in this order:
Noticing the problem is itself hard, but too much focus on creating HEAs will sometimes cause you to miss the most impactful people
A speculative model of things going wrong
If these problems are real, they’re systemic
Scaling makes them worse
The faster your community is growing, the less experienced the majority of members will be.
After that, I will at least try to offer some recommendations.
Noticing the problem is itself hard, but too much focus on creating HEAs will sometimes cause you to miss the most impactful people
I think the basic problem is that, firstly, we might be failing to consider hard-to-measure factors, and secondly, we might be overweighting easy-to-measure factors. These are of course intimately connected.
On the first point: Zealous community building might sometimes cause big downsides that are really hard to measure. If somebody comes to an intro talk, leaves, and never comes back, you don’t usually find out why. Even if you ask them, you probably can’t put much weight on their answer: they don’t owe you anything, and they might quite reasonably be more interested in giving you an answer that makes you leave them alone, even if it’s vague or incomplete. You should expect there to be whole categories of reasons (like ‘you guys seem way more zealous than I’m comfortable with’) which you’ll be notably less likely to hear about relative to how often people think them, especially if you’re not prioritising getting this kind of feedback.
Even worse, if something about EA switches them off before they even come to the intro talk, you won’t even realise. If something you say in your talk is so bad that it causes someone to go away and start telling all their most promising and altruistic friends that EA is a thinly-veiled cult, you will almost never find out—at least not without prioritising feedback from people who are no longer engaged.
Second, despite some pushback, current EA community building doctrine seems to focus heavily on producing ‘Highly Engaged EAs’ (HEAs). It is relatively easy to tell if someone is a HEA; the less engaged someone is, the harder it is to tell. Unfortunately, some people will take longer to become HEAs (or never become one at all), but will still have a higher impact than the median HEA, even in proportion to however long it takes them to reach whatever level of engagement they do reach.
I think the model of prioritising HEAs does broadly make sense for something like AI safety: one person actually working on AI safety is worth more than a hundred ML researchers who think AI safety sounds pretty important but not important enough to merit a career change. But elsewhere it’s less clear. Is one EA in government policy worth more than a hundred civil servants who, though not card-carrying EAs, have seriously considered the ideas and are in touch with engaged EAs who can call them up if need be? What about great managers and entrepreneurs?
I don’t actually know the answer here, but what I do know is that the first option—one HEA in a given field—is much easier to measure and point to as evidence of success.
To be really clear, I’m not advocating for an absolute shift in strategy away from HEAs to broader and shallower appeal. What I’m saying is that I don’t think it’s clear-cut, but a focus on measurably increasing the number of HEAs is likely to miss less legible opportunities for impact.
Why can’t people appreciate the deep and subtle mysteries of community building? Well, this is where Goodhart’s Law crops up: a measure that becomes a target to be optimised ceases to be a good measure.
The main way Goodhart’s Law kicks in is that the people setting strategy have a much more nuanced vision than the people executing it. The reason everyone’s pushing for community building, I believe, is that people right in the heart of EA thought about what a more effective and higher-impact EA would look like, and what they pictured was an EA which was much larger and contained many more highly-engaged people. Implicit in that picture were a bunch of other features—strong capacity for coordination, good epistemics, healthy memes, and so on.
But when that gets distilled down to “community building” and relayed to people who have only been in university for a year or so, quite understandably they don’t spontaneously fill in all the extra details. What they get is “take your enthusiasm for EA, and make other people enthusiastic, and we’ll know you’re doing well if at the end of the year there are more HEAs”.
But often the best way to make more HEAs is not the best way to grow the community!
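To make this worry concrete, here is a toy simulation (purely illustrative, not from the post; the numbers and the assumed mild anti-correlation are made up) of what happens when you select hard on a measurable proxy that is even slightly in tension with a trait you cannot measure:

```python
import random

# Toy illustration of the Goodhart worry (all numbers are hypothetical).
# Each candidate has a measurable trait (how quickly they become highly
# engaged) and a hard-to-measure trait (e.g. independent, critical thinking).
random.seed(0)
candidates = []
for _ in range(10_000):
    critical_thinking = random.gauss(0, 1)
    # Assumption: quick engagement is mildly anti-correlated with the
    # tendency to push back on arguments before accepting them.
    engagement_speed = random.gauss(0, 1) - 0.3 * critical_thinking
    candidates.append((engagement_speed, critical_thinking))

# "Optimise for HEAs": keep the 10% who engage fastest.
selected = sorted(candidates, reverse=True)[: len(candidates) // 10]

avg_all = sum(c[1] for c in candidates) / len(candidates)
avg_sel = sum(c[1] for c in selected) / len(selected)
print(f"Average critical thinking, all candidates:      {avg_all:+.2f}")
print(f"Average critical thinking, fastest 10% engaged: {avg_sel:+.2f}")
# The selected group scores noticeably below the population average.
```

The point is not the specific numbers, just that even a mild tension between the proxy and an unmeasured trait produces a consistent shift in the community once you select hard on the proxy.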
A speculative model of things going wrong
This is a bit more speculative, but I’d like to sketch out in more detail a model of how this plays out. I’d like to conjure up two hypothetical students, Alice and Bob, at their first EA intro fellowship session.
Alice
Alice has a lot of experience with strange ideas. She’s talked to communists, alt-righters, crypto bros, all kinds of people. She’s very used to people coming along with an entirely new perspective on what’s important, and when they set the parameters, she expects them to have arguments she can’t reply to, because she’s an undergrad and they’re cribbing their notes from professors, and sometimes literally reciting arguments off a script. Of course she actually quite likes sitting down and thinking through the problems: she enjoys the intellectual challenge. She knows the world is full of Pascal’s Muggers. She doesn’t know if EAs are muggers, but she knows they like getting people to promise to give away 10% of their income (which sounds to her like a church tithe), and she’s heard they sweep people away on weekend retreats. Still, she can appreciate that if they are right, what they’re doing is important, so she suspends her judgement.
At the opening session she disputes some of the assumptions, and the facilitators thank her for raising the concerns, but don’t really address them. They then plough on, building on those assumptions. She is unimpressed.
Bob
Bob came to university feeling a bit aimless. He’s not really sure what he wants to do with his life, or how he should even decide. Secretly he’d kind of like it if someone could just tell him what he was meant to do, because sometimes it feels like the world’s in a bad state and he doesn’t really get why or how to fix it. So when he hears the arguments in the opening session he’s blown away. He feels like he’s been handed a golden opportunity: if they’re right, he can be a good person, who does important work, with a close group of friends who all share his goals and values.
Are they right? He’s not sure. He’s never really considered these arguments but they seem very persuasive. And the organisers keep talking about epistemics, and top researchers. If they’re wrong, he’s not even sure how he’d tell, but if they’re right then it’s pretty important that he starts helping out right away. And he kind of wants them to be right.
If these problems are real, they’re systemic
We should expect that new EAs doing community building will misunderstand high-level goals in systematic ways.
What this means is, it’s not just that some random cluster of promising people will get missed; it’s that certain kinds of promising people will get missed, consistently, and EA as a whole will shift its composition away from those kinds of people. To be clear, this isn’t absolute: it’s not that everyone capable of criticism is filtered out, it’s that every group that prioritises producing HEAs will be slightly filtering against such people, and the effects will compound across the entire community.
If you’ve been told that CEA has hired you as a community builder because they think that counterfactually this will lead to ten more HEAs, and indeed, you think that it’s really very important to get more HEAs so that there are more people working on the biggest problems, and you meet an Alice and a Bob, well, maybe you’d rather talk to Bob about how to get into community building instead of talking to Alice about alternative foundations to the Rescue Principle.
And maybe this really is the right choice in individual cases. The problem is if it gradually accumulates. Eventually EA as a whole becomes more Bob than Alice, not just in terms of how many people with really fantastic epistemics there are, but also in terms of the epistemic rigour of the median HEA.
Personally the thing I’m most worried about is that this effect starts to wreck EA group epistemics and agency. I’ve seen little traces here and there which have given me concerns, although I don’t yet feel I can confidently claim that this is happening. But I think it’s really really important that we notice if it is, so that we can stop it. And this phenomenon is hard to notice.
Ironically, we should expect community building to tend towards homogeneity because community builders will beget other community builders who find their strategies compelling. And we should expect this to tend towards strategies that are easy to quickly adopt.
There has been some emphasis lately on getting community builders to develop their own ‘inside views’ on important topics like AI safety, partly so that they can then relay these positions with higher fidelity. I welcome this, but I don’t think it’s sufficient to solve the problem of selecting against traits we value. Understanding AI safety better doesn’t stop you from putting people off for any reason other than your understanding of AI safety.
A little while after I first drafted this post, there was a popular forum post entitled “What psychological traits predict interest in effective altruism?” I commend the impulse to research this area but I can very easily picture how it goes wrong, because while it may be true that there are certain characteristics which predict that people are more likely to become HEAs, it does not follow that a larger EA community made up of such people would automatically be better than this one.
Scaling makes them worse
It might now occur to you that not everyone joins EA through student groups. Some people come from LessWrong, some people see a TED Talk, some people just stumble across the forum. It’s true, and these will all be filtering in different kinds of people with different interests and values.
As the community changes, the way it grows will change. The thing to avoid is a feedback loop that sends you spiralling somewhere weird. Unfortunately this is exactly what you encourage when you try to scale things up. The easier something is to scale, well, the faster you’ll scale it.
If you have a way of community building which produces ten HEAs in a year, two of whom will be able to follow in your footsteps, you will very quickly be responsible for the majority of EA growth. The closer a student organiser is to creating the maximum number of HEAs possible, the more likely they are to be Goodharting: trading away something else of value for more HEAs.
And bear in mind: the faster you’re growing, the newer the median member of the community will be. If EA doubled in size every year, then half of EAs would have been EAs for less than a year. And if any portion of EA managed to crack a way of doubling in size every year, it would very quickly make up the majority of the community.
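As a minimal illustration of this arithmetic (the starting size is arbitrary and attrition is ignored), a community that doubles every year always has a newest one-year cohort equal to half its membership:

```python
# Hypothetical illustration: a community that doubles in size every year.
size = 100  # assumed starting size; attrition is ignored for simplicity
for year in range(1, 6):
    new_joiners = size          # doubling: this year's joiners equal last year's total
    size += new_joiners
    share_new = new_joiners / size
    print(f"Year {year}: total {size}, joined within the last year: {share_new:.0%}")
# Every year this prints 50%: the median member has under a year of tenure.
```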
The faster your community is growing, the less experienced the majority of members will be.
Concretely, I worry that university groups risk instantiating this pattern. The turnover is quick, and the potential for rapid growth and scaling is a big selling point.
I imagine that older EA groups will have had both time to grow and time to consider downside risks. They’ll have more experienced members who can be more careful, and also less of a pressure to expand. On the other hand, newer groups will be saddled with both less experience and more desire and opportunity to scale up quickly.
It’s also generally hard, as someone with experience, to remember what it was like being inexperienced, and what was or wasn’t obvious to you. It’s easy to assume that people understand all the subtext and implications of your claims far more than they actually do. We need to actively resist this when dealing with newer, more inexperienced community builders.
Part 4 - Why to prioritise this problem, and what to do about it
You might think that, while this problem exists, it’s not worth focusing resources on it because it’s not as high-priority as problems like AI safety research. If better epistemics trades off against getting more alignment researchers, maybe you think it’s not worth doing. However, it’s not clear at all that this is the case.
First, AI safety researcher impact is long-tailed, and I claim that the people on the long tail all have really unusually good epistemics, such that trading against good epistemics in favour of more AI safety researchers risks trading away the best for the worst.
Second, most groups in history have been wrong about some significant things, including groups that really wanted to find the truth, like scientists in various fields. So, our strong outside view should be that, either at the level of cause prioritisation or within causes, we are wrong about some significant things. If it’s also sufficiently likely that some people could figure this out and put us on a better path, then it seems really bad that we might be putting off those very people.
Third, imagine a world in which EA student groups are indeed significantly selecting against traits we value. Ask yourself if, in this world, things might look roughly as they do now. I think they might. It’s easy to let motivated reasoning slip in when one wants to avoid acknowledging a tradeoff—for example, I’ve often told myself I have time to do everything I want to do, when in fact I don’t. This problem could be happening, and you might think this problem isn’t happening even if it is! Until we spend some resources getting information, our uncertainty about whether / how badly the problem is happening should push us towards prioritising the problem. (It might be really bad, and we don’t yet know if it is!) If we later found out that it wasn’t a big deal, we could deprioritise it again.
For all the same reasons that doing community building is important, it is really important to do it right.
So what do you do about all this?
It’s probably not enough just to acknowledge that it might be a problem if you don’t prioritise it. It’s also not enough (though it may be useful) to select for virtuous traits when choosing, for example, your intro fellows. Even if you do this, you will still miss out on anyone who is put off after the selection process, or who doesn’t even apply because they’ve heard that EA is a weird cult.
Honestly, it’s hard. I have to admit the limits of my own knowledge here: I don’t know what constraints community builders are acting under, or what the right balance between these factors is. Moreover, the issue I’m pointing to is, in the most general terms, that there are lots of hard-to-measure things which people might not be properly measuring. It’d be very easy, I suspect, to read this post and think “Look at all these other factors I hadn’t considered! Well, I’d better start considering them,” and move on, when in fact what you need to do is one meta-level up: start looking for illegible issues, and factors that nobody else has even considered yet.
So, now that you know that I don’t have all the answers, and that literally following my advice as written will only sort of help, here is my advice.
Don’t actually think in terms of producing more HEAs. Yes, good community building will lead to more HEAs, but producing more HEAs is not enough to make what you’re doing good community building.
If you’re high-status within EA, think carefully about how you react to community builders who seem to create lots of HEAs. Don’t just praise them, but also coach them and monitor them. The more HEAs created, the more you should be suspicious of Goodharting (despite the best of intentions), so work together to avoid it.
Consider the downside risks from activities you’re running.
A useful framework is to seriously consider what types of people might be put off / selected against by an activity, and a good list of types to start with is the ones the Leaders Forum says EA needs.
Adopt the rule of thumb: ‘If many people would find it creepy if they knew we were doing x, don’t do x.’
Notice that many EA community builders seem to have different norms from other students in this regard. Especially if you didn’t think things like reading pre-scripted persuasive speeches or recording details from 1-on-1s in a CRM without asking seemed sinister, default to asking a handful of (non-EA) friends what they think before introducing a new initiative.
It might be helpful for there to be a community-building Red Team organisation, which could scrutinise both central strategies (e.g. from CEA or OpenPhil) and the activities of individual student groups.
Assume that people find you more authoritative, important, and hard-to-criticise than you think you are. It’s usually not enough to be open to criticism—you have to actually seek it out or visibly reward it in front of other potential critics.
Maybe try things like giving pizza to intro fellows who left in exchange for feedback.
You want good feedback from everyone, not just those who you thought would be highly impactful, since whatever puts one person off EA is just as likely to put off anyone else, whether or not they have potential for high impact.
No-one seems sure how much low-fidelity / misleading messages about EA are being spread. It would be great (and at least partly tractable) to research this.
Be open to changing your mind. I know this is kind of overplayed, but there’s a whole sequence on it, and it’s pretty good. Remember that the marginal value of another HEA is way lower than the marginal value of an actual legitimate criticism of EA nobody else has considered yet.
Seriously consider adding more events geared towards presenting ideas rather than persuading.
Don’t offer people things they want in exchange for self-identification and value adoption. Free pizza for showing up to a discussion group is fine, but if people feel like they’ll get respect and a friendship group only if they go around saying “AI safety seems like a big deal”, then that will be why some of them go around saying “AI safety seems like a big deal”.
Message me. I’ll try to reply to comments and messages. It’s hard for me to predict in advance what parts of this will or won’t be clear, so I invite you to tell me what doesn’t make sense.
Read these articles, if you feel so inclined (ranked from most to least useful in my opinion):
https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs
https://www.lesswrong.com/posts/4PPE6D635iBcGPGRy/rationality-common-interest-of-many-causes
https://meaningness.com/metablog/upgrade-your-cargo-cult
I have been community building in Cambridge UK in some way or another since 2015, and have shared many of these concerns for some time now. Thanks so much for writing them up much more eloquently than I would have been able to!
To add some more anecdotal data, I also hear the ‘cult’ criticism all the time. In terms of getting feedback from people who walk away from us: this year, an affiliated (but non-EA), problem-specific table coincidentally ended up positioned downstream of the EA table at a freshers’ fair. We overheard approximately 10 groups of 3 people discussing that they thought EA was a cult, after they had bounced from our EA table. Probably around 2,000-3,000 people passed through, so the people we overheard were only 1-2% of them.
I managed to dig into these criticisms a little with a couple of friends-of-friends outside of EA, and got a couple of common pieces of feedback which it’s worth adding.
We give away free books lavishly, written by longstanding members of the community. To some outside the community, these feel like doctrine.
Being a member of the EA community is all or nothing. My best guess is we haven’t thought of anything less intensive to keep people occupied, due to the historical focus on HEAs, where we are looking for people who make EA their ‘all’ (a point well made in this post).
Personally, I think one important reason the situation is different now to how it was some years ago is that EA has grown in size and influence since 2015. It’s more likely someone has encountered it online, via 80k or some podcast. In larger cities, it’s more likely individuals know friends who have been to an EA event. I think we have ‘got away with’ people thinking it’s a cult for a while because not enough people knew about EA. I like to say that the R rate of gossip was < 1, so it didn’t proliferate. I feel we’re nearing or passing a tipping point where discussing EA without EA members present becomes an interesting topic of conversation for non-EAs, since people can relate and have all had personal experiences with the movement.
In my own work now, I feel much more personally comfortable leaning into cause area-specific field building, and groups that focus around a project or problem. These are much more manageable commitments, and can exemplify the EA lens of looking at a project without it being a personal identity. Important caveats for the record: I still think EA-aligned motivations are important, I am still a big supporter of the EA Cambridge group, and I think it is run by conscientious people with good support networks :-)
The absolute strongest answer to most of the critiques or problems that have been mentioned recently is strong object-level work.
If EA has the best leaders, the best projects and the most success in executing genuinely altruistic work, especially in a broad range of cause areas, that is a complete and total answer to:
“Too much” spending
billionaire funding/asking people to donate income
most “epistemic issues”, especially with success in multiple cause areas
If we have the world leaders in global health, animal welfare, pandemic prevention, and AI safety each say, “Hey, EA has the strongest leaders and its ideas and projects are reliably important and successful”, no one will complain about how many free books are handed out.
I broadly agree with this, but at least with AI safety there’s a Goodharting issue: we don’t want AIS researchers optimising for legibly impressive ideas/results/writeups.
I assume there’s a similar-in-principle issue for most cause areas, but it does seem markedly worse for AIS, given the lack of meaningful feedback on the most important issues.
There’s a significant downside even in having some proportion of EA AIS researchers focus on more legible results: it gives a warped impression of useful AIS research to outsiders. This happens by default, since there are many incentives to pick a legibly impressive line of research, and there’ll be more engagement with more readable content.
None of this is to say that I know e.g. MIRI-style research to be the right approach.
However, I do think we need to be careful not to optimise for the appearance of strong object level work.
I agree, and think this is an argument for investing in cause-specific groups rather than generalized community building.
When I was working for EA London in 2018, we also had someone tell us that the free books thing made us look like a cult and they made the comparison with free Bibles.
One option here could be to lend books instead. Some advantages:
Implies that when you’re done reading the book you don’t need it anymore, as opposed to a religious text which you keep and reference.
While the distributors won’t get all the books back (and that’s fine), the ones they do get back can be lent out again.
Less lavish, both in appearance and in reality.
This is what we do at our meetups in Boston.
It’s also a nice nudge for people to read the books (I remember reading Doing Good Better in a couple of weeks because a friend/organiser had lent it to me and I didn’t want to keep him waiting).
I believe that EA could tone down the free books by 5-10% but I am pretty skeptical that the books program is super overboard.
I have 50+ books I’ve gotten at events over the past few years (when I was in college), mostly politics/econ/phil stuff: the complete works of John Stuart Mill and Adam Smith, The Myth of the Rational Voter, The Elephant in the Brain, The Three Languages of Politics, etc. (all physical books). Bill Gates’ book has been given out as a free PDF recently.
So I don’t think EA is a major outlier here. I also like that there are some slightly less ‘EA books’ in the mix, like The Scout Mindset and The AI Does Not Hate You.
I think it’s not free books per se, but free books tied to phrases like “here’s what’s really important” and “this is how to think about morality”, that are problematic in the context of the Bible comparison.
I’m not sure what campus EA practices are like, but in between pamphlets and books there are zines. Low-budget, high-nonconformity, high-persuasion. Easy for students to write their own, or make personal variations, instead of treating them like official doctrine. E.g. https://azinelibrary.org/zines/
Nice. And when it comes to links, ~half the time I’ll send someone a link to the Wikipedia page on EA or longtermism rather than something written internally.
The criticisms of EA movement building tactics that we hear are not necessarily the ones that are most relevant to our movement goals. Specifically, I’m hesitant to update much on a few 18 year olds who decide we’re a “cult” after a few minutes of casual observation at a fresher’s fair. I wouldn’t want to be part of a movement that eschewed useful tools for better-integrating its community because it’s afraid of the perception of a few sarcastic teenagers.
Instead, I’m interested in learning about the critiques of EA put forth by highly-engaged EAs, non-EAs, semi-EAs, and ex-EAs who care about or share at least some of our movement goals, have given them a lot of thought, are generally capable people, and have decided that participation in the EA movement is therefore not for them.
I made this comment with the assumption that some of these people could have extremely valuable skills to offer to the problems this community cares about. These are students at a top uni in the UK for sciences, many of whom go on to be significantly influential in politics and business, at a much higher rate than at other unis or in the general population.
I agree not every student fits this category, or is someone who will ever be inclined towards EA ideas. However, I don’t know whether we are claiming that being in this category (e.g. being in the top N% at Cambridge) correlates with a more positive baseline impression of EA community building. Maybe the more conscientious people weren’t ringleaders in making the comments, but they will definitely hear them, which I think could have social effects.
I agree that EA will not be for everyone, and we should seek good intellectual critiques from those people that disagree on an intellectual basis. But to me the thrust of this post (and the phenomenon I was commenting on) was: there are many people with the ability to solve the world’s biggest problems. It would be a shame to lose their inclination purely due to our CB strategies. If our strategy could be nudged to achieve better impressions at people’s first encounter with EA, we could capture more of this talent and direct it to the world’s biggest problems. Community building strategy feels much more malleable than the content of our ideas or common conclusions, which we might indeed want to be more bullish about.
I do accept the optimal approach to community building will still turn some people off, but it’s worth thinking about this intentionally. As EA grows, CB culture gets harder to fix (if it’s not already too large to change course significantly).
I also didn’t clarify this in my original comment. It was my impression that many of them had already encountered EA, rather than them having picked this up from the messaging of the table. It’s been too long to confirm for sure now, and more surveying would help to confirm. This would not be surprising though, as EA has a larger presence at Cambridge than at most other unis (and not everyone at a freshers’ fair is a first year; many later-stage students attend to pick up new hobbies or whatever).
Another way of stating this is that we want to avoid misdirecting talent away from the world’s biggest problems. This might occur if EA has identified those problems, effectively motivates its high-aptitude members to work on them, but fails to recruit the maximum number of high-aptitude members, due to CB strategies optimized for attracting larger numbers of low-aptitude members.
This is clearly a possible failure mode for EA.
The epistemic thrust of the OP is that we may be missing out on information that would allow us to determine whether or not this is so, largely due to selection and streetlamp effects.
Anecdata is a useful starting place for addressing this concern. My objective in my comment above is to point out that this is, in the end, just anecdata, and to question the extent to which we should update on it. I also wanted to focus attention on the people who I expect to have the most valuable insights about how EA could be doing better at attracting high-aptitude members; I expect that most of these people are not the sort of folks who refer to EA as a “cult” from the next table down at a Cambridge fresher’s fair, but I could be wrong about that.
In addition, I want to point out that the character models of “Alice” and “Bob” are the merest speculation. We can spin other stories about “Cindy” and “Dennis” in which the smart, independent-minded skeptic is attracted to EA, and the aimless believer is attracted to some other table at the fresher’s fair. We can also spin stories in which CB folks wind up working to minimize the perception that EA is a cult, and this having a negative impact on high-talent recruitment.
I am very uncertain about all this, and I hope that this comes across as constructive.
A friendly hello from your local persuasion-resistant moderately EA-skeptic hole-picker :)
Nice to see you here, Ferenc! We’ve talked before, when I was at OpenAI and you at Twitter, and I’m always happy to chat if you’re pondering safety things these days.
Hi, thank you for starting this conversation! I am an EA outsider, so I hope my anecdata is relevant to the topic. (This is my first post on the forums.) I found my way to this post during an EA rabbit hole after signing up for the “Intro to EA” Virtual Program.
To provide some context, I heard about EA a few years ago from my significant other. I was/am very receptive to EA principles and spent several weeks browsing through various EA resources/material after we first met. However, EA remained in my periphery for around three years until I committed to giving EA a fair shake several weeks ago. This is why I decided to sign up for the VP.
I’m mid-career instead of enrolled in university, so my perspective is not wholly within the scope of the original post. However, I like to think that I have many qualities the EA community would like to attract:
I (dramatically) changed careers to pursue a role with a more significant positive impact and continue to explore how I can apply myself to do the “most good”.
I’m well-educated (1 bachelor’s degree & 2 master’s degrees).
As a scientist for many years, I value evidence-based decision-making and rationality, both professionally and personally.
I have professional experience managing multiple projects with large budgets and diverse stakeholders. This requires three of the six skills listed in the top talent gaps identified at your Leaders Forum (as mentioned in the original post).
As a data scientist, I have practical & technical expertise in machine learning (related to the last talent gap in the list mentioned above).
I’m open-minded. (No apparent objective evidence comes to mind. I suppose you’ll have to talk to me and verify for yourselves. :-) )
If we agree that EA would prefer to attract rather than “turn off” people with these qualities, then the following introspections regarding my resistance to participating in the movement may be helpful:
The heavy, heavy focus on university recruiting feels… off.
First, let me emphasise that I understand all the practical reasons for focusing on student outreach. @Chris Long does a great job listing why this is an actionable, sensible strategy in this thread. I understand and sympathise with EA’s motivations. My following points are from an “EA outsider” perspective, and from that of others who may not care enough to consider the matter more deeply than their initial impression.
Personally, ‘cult’ didn’t immediately come to mind, despite it being a common criticism many of you have encountered. Still, the aggressive focus on recruiting (primarily young) university students can seem a bit predatory. When there is a perceived imbalance in recruitment tactics, red flags can instinctively pop up in the back of people’s minds.
The EA community seems homogeneous—and not just demographically.
The homogeneity is a natural consequence of the heavy focus on university outreach. Whenever I encounter EAs, I’m generally the oldest… and I’m only in my 30s! (Is there a place in this community if you’re not fresh out of uni?) The youthful skew of the community contributes to an impression that there is a small group of influential figures dictating the vision/strategy of the movement and a mass of idealistic, young recruits eager to execute on it. People who get things done want to find other people who get things done. It’s not reassuring if it feels like the movement is filled with young (albeit talented & intelligent) people who can barely be trusted with leading student groups (requiring scripts, strict messaging, etc.).
Since the aggressive university outreach focuses on prestigious institutions, the group can seem elitist. Again, I understand the cold realities of this world mean that there are practical considerations for supporting this approach. As an outsider, it isn’t easy to discern if the pervasive mentions of top institutions are for practicality or for signalling. I also understand the importance of epistemic alignment. However, when the EA Global application requirement (as an example) is juxtaposed alongside aggressive recruitment at top universities, it starts to seem like EA is looking for “the right kind of people” to join their club in a less benign sense. Admittedly, I have a giant chip on my shoulder from my upbringing on the wrong side of the socio-economic tracks. Even with that self-awareness (and a Berkeley degree), some of my hesitancy to engage is the concern that my value to the community would not be judged mainly on the merit of my contributions but rather on my academic pedigree. I value my time and energy too much to play those games.
Breaking into the hive mind
EAs seem uniformly well-informed and studied on a core body of seminal studies, books, websites, and influential figures. Objectively, it’s a credit to your community that there is such high engagement and consistency in your messaging. To an outsider, it feels like a steep learning curve before being considered a “real EA”. (Is there an admission exam or something? Do I need to recite Peter Singer from memory? :-) ) This is more of a compliment than anything. Maybe just be mindful of what you’re trying to achieve when you name-drop, cite a study, or reference philosophy terminology in conversation. Is the motivation in doing so to communicate clearly or to posture? At best, EA newbies may feel intimidated. At worst, they/we may get defensive.
To a natural sceptic and critical thinker, the uniformity also feels a little like indoctrination. What are the areas of active constructive disagreement? Does the community accept (or even encourage) dissenting (but well-reasoned) opinions? What are the different positions? What are the caveats of the seminal studies? It’s not apparent on the surface, and free-thinkers are generally repelled at the notion of conformity for the sake of belonging. (In the “Intro to EA” Virtual Program syllabus, I noticed that there is attention to EA critiques. I’m looking forward to experiencing how that conversation is facilitated.)
Does EA care about anything other than AI safety nowadays?
I read about all these significant EA initiatives tackling malaria, global poverty, factory farming, etc., during my first exploration of the movement a few years ago. But nowadays, it seems that all I hear about is AI safety. Considering how challenging it is to forecast existential risk, are you really so confident that this one cause is the most impactful, most neglected, and most tractable that it warrants overshadowing all the other causes? I agree that AI safety is an important cause that deserves attention. However, the fervour around it seems awfully reminiscent of the “Peak of Inflated Expectations” on the Gartner Hype Cycle. It’s not so much that I have anything against AI safety in particular. The impression of “hype” itself is not a great look if someone is seeking a community of critical thinkers to engage with. Combined with the homogeneity of the community, it makes me suspicious of “groupthink”.
I want to explicitly state that I know that not all of these impressions are entirely true. I know that EAs aren’t all out-of-touch, pretentious jerks. The 80,000 Hours job board has several postings across many cause areas aside from AI safety. The impressions described above are primarily from my perspective before actively trying to vet my concerns. However, I imagine that others who share these impressions don’t bother to validate their concerns before dismissing the movement.
So why did I go through the trouble of digging deeper? Well, probably because EA is the closest I’ve found to a community consistent with my own values, motivations, and interests. Despite my reservations, I really want my concerns to be wrong and for EA to work. More importantly, I’ve grown to trust the values, motivations, judgement, and competency of my significant other, who is committed to EA’s mission. Through him, I’ve met other EAs who are also great people. Quality people tend to attract other quality people. For this reason, @Theo Hawking’s imperative to pause and reflect on a) what EA considers a quality conversion and b) whether current EA practices are attracting/repelling quality conversions is a worthy exercise.
On a final note, I suspect that the comments about the free books or the 10% tithing to charity, which people give to explain their “cult” label for EA, are merely convenient justifications and don’t address the core of their impression. After all, why would they bother investing effort to pinpoint and articulate the sources of their general negative feeling about the movement if they’re already disengaged? I suspect that the “cult” feeling has more to do with the homogeneity and “groupthink” concerns I described above. To combat these negative impressions, I’d recommend:
Diversify your recruitment tactics. I particularly liked the suggestion about recruiting around specific cause areas mentioned by @Jamie Bernardi. I suspect this will also help with your talent gaps. Representation at adjacent conferences/events would also be a channel to reach established professionals. As I was exploring how I might do the most good before I heard of EA, I attended many events like the Data for Good Exchange 2019 (bloomberg.com) and would have been very receptive to hearing about EA there.
Emphasise the projects and the work. @Charles He hit the nail on the head. I would go even further than just aiming to have the best leaders in cause areas. Are EA orgs/work generally respected and well-regarded by other players in the cause area? In other words, does EA “play well with others”, or are you primarily operating in your own bubble? Suppose EA is objectively and demonstrably doing great work. In that case, other major players should be open to adopting similar practices and further magnifying the impact. If that’s not happening, does EA have the self-awareness to understand why and act upon it?
In conversations with outsiders, favour tangible issues/outcomes and actionable ideas instead of thought experiments. (My perspective skews to the practical, so feel free to discount my emphasis on this point depending on your role in EA.) If the aim is to get more people excited about doing the most good, then describe the success of the Against Malaria Foundation or the scale of impact specific government policies may have rather than using the trolley problem to discuss utilitarianism. Yes, thought experiments are both fun and valuable, but there is a time and a place.
Be accepting of varying styles of communication around ideas and issues. Not everyone interested in cause areas or doing “the most good” will be fluent in philosophy or psychology. If we can communicate concepts, thoughts, or ideas reasonably and productively, it’s often unnecessary to derail the conversation onto a pedantic tangent. Don’t treat me like an unenlightened pleb if you have to explain why you name-dropped a researcher I hadn’t heard of during our conversation. (This is somewhat tongue-in-cheek. :-) )
I hope my diatribe will be received constructively because I am invested in seeing EA succeed regardless of whether I consider myself an EA at the moment. Anecdata is not rigorous, so who knows how generalisable my data point is. However, upon reading this thread, I realised that my complicated disposition towards EA is not uncommon and decided to share my viewpoint. Whatever that’s worth. :-)
Thanks so much for sharing your thoughts in such detail here :)
Thank you for raising this issue. You are in your 30s; I am in my 50s and partway through the Intro to EA program. If you can feel like an outsider as a thirty-something, imagine how it might feel for a fifty-something.
These are, briefly, my thoughts:
There is such a predominance of youth that there is a sense that much of this has not been thought about before, and therefore that my lived experience has little merit. Yet I have lived the life of an EA, even if it had no name.
There is a certain complacency in the idea that EA is using science for decision-making (I noted Toby Ord’s reference to that in a talk) without perhaps remembering that scientists are simply biased humans too. Galton was a much-lauded academic statistician but also pioneered eugenics.
I have a bias here: my neurodiversity means I have significant issues with mathematical concepts, yet I managed to understand the excess risk being taken in the City in 2006. I left my legal role because I was exhausted defending the spread of the much-praised skills of hedge funders and the like. I remain convinced that there is a substantial failure to admit that raw human behaviours are very strong over-rulers. Dominant men had new toys, and they would be used. That feeling comes through strongly in that excellent Forum post on the race for the nuclear bomb, and it had already begun to come through to me around AI (I had created a shortcut explanation in my head: “oh, it’s the usual race thing, and some overpowerful man will just one day set something going because he can”).
It is hard to find answers to what feel like some very basic questions, such as the choice of charities on GiveWell. It seems to me that some hard questions don’t even get asked: for example, why should charitable donations make good what a Nigerian government is failing to do in its own programme to distribute Vitamin A? I have searched for criteria that might address the choice of charity but cannot find them. I also do not understand why malaria vaccines are not prioritised over reducing the risk of infection. This is of particular interest to me as I was the founding trustee of a charity in the UK that has its parent in the US; my scout mindset has still found no reason to doubt my support of it. I wonder about the potential for GiveWell to act as a funnel that adversely affects other charities, creating its own neglectedness. I raise these in the Intro discussions, but there is no traction or explanation.
I hope that somehow I will find my place within the EA world—maybe I can set up “EA for Oldies; your contribution is relevant too”? I understand that I have only been looking at the Forum for a month or so; if someone can point to any area that does consider how those of us towards the end of our careers can contribute, I would be very grateful.
Thanks so much for sharing your perspective in such detail! Just dropping a quick comment to say you might be interested in this post on EA for mid-career people by my former colleague Ben Snodin if you haven’t seen it. I believe that he and collaborators are also considering launching a small project in this space.
Thanks for the lead! The post you linked seems perfectly suited to me. I’ll also contact Ben Snodin to inquire about what he may be working on around this matter.
For onlookers, there’s also a website by Ben (my coworker) and Claire Boine.
While the post and this comment are now both ancient, I feel compelled to at least leave a short note here after reading them.
My background is in many ways similar to Sarah’s, and I came into contact with the EA community about half a year ago. Unfortunately, 2.5 years after this post was written, most of the points raised here resonate heavily with my experience: especially the hive mentality, the heavy focus on students (with little effort directed at professionals), and the overemphasis on AI safety (or, more generally, highly specialised cause areas overshadowing the overall philosophy).
I don’t know what the solutions are but the problem seems to be still present.
Hey Theo—I’m James from the Global Challenges Project :)
Thanks so much for taking the time to write this—we need to think hard about how to do movement building right, and it’s great for people like you to flag what you think is going wrong and what you see as pushing people away.
Here’s my attempt to respond to your worries with my thoughts on what’s happening!
First of all, just to check my understanding, this is my attempt to summarise the main points in your post:
My summary of your main points
We’re missing out on great people as a result of how community building is going at student groups. A stronger version of this claim would be that current CB may be selecting against people who could most contribute to current talent bottlenecks. You mention 4 patterns that are pushing people away:
EA comes across as totalising and too demanding, which pushes away people who could nevertheless contribute to pressing cause areas. (Part 1.1)
Organisers come across as trying to push particular conclusions to complex questions in a way that is disingenuous and also epistemically unjustified. (Part 1.2)
EA comes across as cult-like; primarily through appearing to try too hard to be persuasive, pattern-matching to religious groups, and coming across as disingenuously friendly (Part 1.3, your experience)
There aren’t as many ways for neartermist-interested EAs to get involved in the community, despite them being able to contribute to EA cause areas (Part 1.4)
My understanding is that you find patterns (2) and (3) especially concerning. So to elaborate on them, you’re worried about:
EA outreach is over-optimising on persuasion/conversion in a way that makes epistemically rigorous and skeptical people extremely averse to EA outreach. You feel like student group leaders are trying to persuade people into certain conclusions rather than letting people decide for themselves.
EA student group leaders are generally unaware and out-of-the-loop on how they are coming across poorly to other people.
EA student group leaders are often themselves pretty new to EA, yet are getting funded to do EA outreach. This is bad because they won’t really know how best to do outreach, due to being so new.
You think these worrying patterns are being driven upstream by a strategic mistake of over-optimising for a metric of “highly engaged EAs”. This is a poor choice of metric because:
A large fraction of people who could excel in an EA career won’t get engaged in EA quickly, but will be slow to arrive at EA conclusions due to their desire to reason carefully and skeptically. Thus you worry that these people will be ignored by EA outreach because they don’t come across as a “highly engaged EA”.
You then suggest some possible changes that student group leaders could make (here I’m just focusing on changes that SG leaders could do):
Don’t think in terms of producing “highly engaged EAs”; in general beware of over-optimising on getting people who quickly agree with EA ideas.
Try and get outside perspectives on whether what you’re doing might be off-putting to others.
Actively seek out criticisms and opinions of people who might have been put off by EA.
Seek to improve your epistemics; do the hard and virtuous thing of being open to criticism even though it’s naturally aversive.
Beware social dynamics that incentivise people to agree with conclusions in return for social approval.
Sorry that was such a long summary (and if I missed out key parts, please do let me know)! I think you’re making many great points.
Here are some of my thoughts in reply:
My thoughts in reply
Over-optimising on HEAs
I agree with all of your specific pieces of advice in your final section. I think they’re great heuristics that every person doing EA outreach should try and adopt.
My overall impression is that many student group leaders also agree with the direction of your advice, but find it hard to implement in real life because in general it’s hard to do stuff right. My impression is that most student group leaders are super overstretched, have lots of university work going on, and are only able to spend several hours per week doing EA outreach work (and generally find it stressful and difficult to stay on top of things).
I think the core failure mode of “getting the people who already initially express the most interest/agreement in EA” does go on, but I think that what drives it is a general tendency to do what’s easier (which is true of any activity) instead of necessarily over-optimising on an explicit metric. Since group leaders are so time-constrained, it’s easier for them to talk to and engage with people who already agree because they don’t have the time or patience to grapple with people who initially disagree.
If group leaders were feeling a lot of pressure to get HEAs from funding bodies, this would be super bad. I’m not sure to what extent this is really going on: CEA’s HEA metric is kinda vague, and I haven’t got the impression from group leaders I’ve talked to that people are trying to optimise super hard on it (would love to hear contrary anecdotes). In general I find most student groups to be small, somewhat chaotically run, and so not very good at optimising for anything in particular.
If this claim is true, then I think that would be an argument for investing more resources into student groups to get them to a state where they have more capacity to make better decisions and spend time engaging with Alice-types.
Here are some of my thoughts on EA coming across as cult-like:
I agree that EA can come off as weird and cult-like at times. I think this is because: (i) there’s a lot of focus on outreach, (ii) EA is an intense idea that people take very seriously in their lives.
I think it’s such a shame that EA comes across this way. At its core I think it’s because it’s so unusual to have communities that are this serious about things. To give a personal anecdote, when I was at university I felt pretty disillusioned and distant from my peers. I felt that things were so messed up in the world and it made me sad that many of my friends didn’t seem to notice or care. When I first met EAs I found it so inspiring how they were so serious about taking personal responsibility for making the world better; no matter what society’s default expectations were.
When I was first getting into EA I was really fervent about doing outreach, and I think I did a pretty bad job. It seemed so important to me that everyone should agree with EA ideas because of the huge amount of suffering that was going on in the world. I found it confusing and disheartening when many of those I talked to simply didn’t agree with EA, or seemed to agree but then didn’t do anything about it. I would argue back in an unconvincing way, which made little progress. Because EA conclusions seemed obvious to me, I didn’t get how people didn’t immediately also agree.
With all that in mind, here is a quick guess of additional heuristics (beyond your suggestions) that student leaders could bear in mind:
It’s not your job to make someone an EA: I think a better framing is to view your responsibility as making sure that people have the opportunity to hear about and engage with EA ideas. But at the end of the day, if they don’t agree it’s not your job to make them agree. There’s a somewhat paradoxical subtlety to it—through coming to peace with the fact that some people won’t agree, you can better approach conversations with a genuine desire to help people make up their own minds.
Look at things from an outsider’s perspective: I don’t have immediate thoughts on tactical decisions like how to use CRMs (although I do find the business jargon quite ugh) or book giveaways. It seems to me that there are good ways and bad ways to do these sorts of things. But your suggestion of checking in with non-EAs about whether they’d find it weird seems great, and so I just wanted to doubly reiterate it here!
Embrace the virtue of patience: I think it’s important to approach EA outreach conversations with a virtue of patience. It can be difficult to embrace, because for EAs outreach feels so high-stakes and valuable. However, if you don’t have patience then you’ll be tempted to do outreach in a hurry, which leads to sloppy epistemics or, at worst, deceitfulness. A patient EA carefully explains ideas; a hurried EA aims to persuade.
I think it would be a shame if we lost the good qualities of EA that make it so unique in the world—that it’s a community of people who are unusually serious about doing the most good they can in their lives. But I think we can do better as a community at not coming across as cult-like by being more balanced in our outreach efforts, being mindful of the bad effects of aiming for persuasion, and coming to peace with the idea that some people just won’t be that into EA and that’s okay (and that doesn’t make them a bad person).
Other strategy suggestions which I think could improve the status quo:
I’d be excited to see more EA adjacent and cause specific outreach. I think having lots of different brands and sub-communities broadens the appeal of EA ideas to different sorts of audiences, and lets people get involved in EA stuff to different extents (so that EA isn’t as all-or-nothing). I’d be keen to see people restart effective giving groups, rationality groups, EA outreach focused on entrepreneurs, and cause specific groups like animal welfare and AI alignment.
Thanks again for taking the time to write the post—it seems like it’s generated great discussion and that it’s something that a lot of people agree with :)
Thanks for this post. If true, it does describe a pretty serious concern.
One issue I’ve always had with the “highly engaged EA” metric is that it’s only a measure for alignment,* but the people who are most impactful within EA have both high alignment and high competence. If your recruitment selects only on alignment this suggests we’re at best neutral to competence and at worst (as this post describes) actively selecting against competence.
(I do think the elite university setting mitigates this harm somewhat, e.g. 25th percentile MIT students still aren’t stupid in absolute terms).
That said, I think the student group organizers I recently talked to are usually extremely aware of this distinction. (I’ve talked to a subset of student group organizers from Stanford, MIT, Harvard (though less granularity), UPenn (only one) and Columbia, in case this is helpful). And they tend to operationalize their targets more in terms of people who do good EA research, jobs, and exciting entrepreneurship projects, rather than in terms of just engagement/identification. Though I could be wrong about what they care about in general (as opposed to just when talking with me).
The pet theory I have for why people focus on “Highly Engaged EAs” as the metric most openly talked about as opposed to (e.g.) “Highly Engaged and Very Competent EAs” is that it looks more fair/egalitarian (at least in theory, anybody can become highly engaged, but very few people can become as smart as X or as good at operations as Y).
Some quick questions:
1. Can you define what you mean in your article by “Pascal’s Mugging?” I think you’re not using the formal definition but as a metonym for something else (which is totally fine, but I’m not sure what idea or cluster of ideas you’re exactly pointing to that’s more specific than “vaguely shady”).
2. Your critiques seem to mostly come from the LessWrong/rationality cluster. Do you have a sense of whether these critiques are shared (though expressed in different cultural languages) from people with good epistemics but from other cultural clusters?
*and of course, like all metrics it’s an imperfect measure for alignment, and can be gamed.
Regarding “Pascal’s Mugging”:
I am not the author, so I might well be mistaken, but I think I can pin down the intended meaning more closely than “vaguely shady”.
One paragraph is
which I read as: “Pascal’s mugging” describes a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities. I think that this in itself need not be problematic (there can be huge stakes which warrant change in behaviour), but if there is social pressure involved in forcing people to accept the premise of huge moral stakes, things become problematic.
One example is the “child drowning in a pond” thought experiment. It does introduce large moral stakes (the resources you use for conveniences in everyday life could in fact be used to help people in urgent need; and in the thought experiment itself you would decide that the latter is more important) and can be used to imply significant behavioural changes (putting a large fraction of one’s resources to helping worse-off people).
If this argument is presented with strong social pressure to not voice objections, that would be a situation which fits under Pascal-mugging in my understanding.
If people are used to this type of rhetorical move, they will become wary as soon as anything along the lines of “there are huge moral stakes which you are currently ignoring and you should completely change your life-goals” is mentioned to them. Assuming this, I think the worry that
makes a lot of sense.
Thanks a lot for the explanation! It does make more sense in context of the text, though to be clear this is extremely far from the original meaning of the phrase, and also the phrase has very negative connotations in our community. So I’d prefer it if future community members don’t use “Pascal’s mugging” to mean “a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities,” unless maybe it’s locally-scoped and clearly defined in the text to mean something that does not have the original technical meaning.
It is unfortunate that I can’t think of a better term off the top of my head for this concept, however; I would be interested in good suggestions.
What is the definition you’d prefer people to stick to? Something like “being pushed into actions that have a very low probability of producing value, because the reward would be extremely high in the unlikely event they did work out”?
The Drowning Child argument doesn’t seem like an example of Pascal’s Mugging, but Wikipedia gives the example of:
and I think recent posts like The AI Messiah are gesturing at something like that (see, even, this video from the comments on that post: Is AI Safety a Pascal’s Mugging?).
Yes this is the definition I would prefer.
I haven’t watched the video, but I assumed it’s going to say “AI Safety is not a Pascal’s Mugging because the probability of AI x-risk is nontrivially high.” So someone who comes into the video with the assumption that AI risk is a clear Pascal’s Mugging since they view it as “a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities” would be pretty unhappy with the video and think that there was a bait-and-switch.
I’m not sure the most impactful people need have high alignment. We’ve disagreed about Elon Musk in the past, but I still think he’s a better candidate for the world’s most counterfactually positive human than anyone else I can think of. Bill Gates is similarly important and similarly kinda-but-conspicuously-not-explicitly aligned.
Yes, if you rank all humans by counterfactual positive impact, most of them are not EA, because most humans are not EAs.
This is even more true if you are mostly selecting on people who were around long before EA started, or if you go by ex post rather than ex ante counterfactual impact (how much credit should we give to Bill Gates’ grandmother?)
(I’m probably just rehashing an old debate, but also Elon Musk is in the top 5-10 of contenders for “most likely to destroy the world,” so that’s at least some consideration against him specifically).
I don’t think the background rate is relevant here. I was contesting your claim that ‘the people who are most impactful within EA have both high alignment and high competence’. It depends on what you mean by ‘within EA’, I guess. If you mean ‘people who openly espouse EA ideas’, then the ‘high alignment’ seems uninterestingly true almost by definition. If you mean ‘people who are doing altruistic work effectively’, then Gates and Musk are, IMO, strong enough counterpoints to falsify the claim.
Many (perhaps most) people who openly espouse EA ideas are people I do not consider highly aligned.
I feel a desire to lower some expectations:
I don’t think any social movement of real size or influence has ever avoided drawing some skepticism, mockery, or even suspicion.
I think community builders should have a solid and detailed enough understanding of EA received wisdom to be able to lay out the case for our recommendations in a reasonably credible way, but I don’t think it’s reasonable to expect them to be domain experts in every domain, and that means that sometimes they aren’t going to be able to seem impressive to every domain expert that comes to us.
To be frank, it isn’t realistic to be able to capture the imagination of everyone who seems promising even if we make the best possible versions of our arguments. Some people will inevitably come away thinking we “just don’t get it”, that we haven’t addressed their objections, that we’re not serious about [specific concern X] and therefore our point of view is uninteresting. Communication channels just aren’t high-fidelity enough, and people’s engagement heuristics aren’t precise enough, to avoid this happening from time to time.
When some people are weirded out by the way we behave or try to attract new members, it seems to me like sometimes this is just reasonable self-protective heuristics that they have, working exactly as intended. People are creeped out by us giving them free books or telling them to change their careers or telling them that the future of humanity is at stake, because they reason “these people are putting a lot into me because they want a lot out of me”. They’re basically correct about that! While we value contributions from people at a wide range of levels of engagement and dedication, the “top end” is pretty extreme, as it should be, and some people are going to notice that and be worried about it. We can work to reduce that tension, but I don’t think it’s going away.
Obviously we should try our best on all of these dimensions, progress can be made, we can be more impressive and more appealing and less threatening and more welcoming. But I can’t imagine a realistic version of the EA community that honestly communicates about everything we believe and want to do and doesn’t alienate anyone by doing that.
I think it will be really important for EAs to engage in more empirical work to understand how people think about EA. Of course you don’t want people to feel like they’re being fed the results of a script tested by a focus group (that’s the whole point of this post), but you do want to actually know in reliable ways how bad some of these problems are, how things are resonating, and how to do better in a genuine and authentic way. Empirical results should be a big part of this (though not all of it), but right now they aren’t, and this seems bad. Instead, we frequently confuse “what my immediate friends in my immediate network think about EA” with “what everyone thinks about EA” and I think this is a mistake.
This is something Rethink Priorities is working on this year, though we invite others to do similar work. I think there’s a lot we can learn!
Strongly agree with this take. There’s nothing stopping us from getting empirical data here, and I think we have no strong reason to expect our personal experiences to generalise, or to expect models we create that aren’t theoretically or empirically grounded to be correct.
I agree with you, and I think this somewhat supports the OP’s concern.
Are most uni groups capable of producing or critiquing empirical work about their group, or about EA or about their cause areas of choice? Are they incentivized to do so at all?
Sometimes yes, but mostly no.
Thank you for writing this. I worry a lot about university groups being led by inexperienced people who have only heard of EA recently, especially given the huge focus on university groups (so, so much more focus than on regional groups or professional groups)! EA seems to be really banking on universities**, so much so that we are kinda screwed if it is done poorly and turns people off. Some thoughts and theories:
1. Experience of organizers:
I bet the mentorship and training in the new University Group Accelerator Program will help, but also I am not sure how much time a mentor will have, and that still assumes only 25 hours of engaging with EA content. From the website:
I realize a low number of hours is a given for this role if you want it to happen at all, but still. That could be enough for someone who is a natural conversationalist to integrate a lot of key lessons and build a deep understanding and mental infrastructure, but for a lot of people it won’t be enough to field concerns well and not sound like cultists (repeating things by rote rather than being conversational). And tbh, some people won’t know what high-quality content is. Is 25 hours with no focus on animal welfare enough? What about that being almost the sole focus? Or what if most of those hours are topical discussion with other inexperienced fellows?
I would love to see the training materials for the UGAP program made public on the forum or on request to dedicated EAs, for red-teaming. Red-teaming community building is a great idea!
2. Conversationality and critiques
I absolutely agree about engaging with people who are naturally critical, and that they are some of the best prospective members. This also reduces the cult vibe. Organisers should ideally be people who have thought through the problems deeply and definitely didn’t just grab onto the first thing or go with authority. Frankly, I don’t think this personality trait/intellectual inclination is something organisers can or should fake, and it is possible that student organisers should be interviewed for this ability. My heuristic is something like “If they couldn’t hang at a rat or EA group house late-night discussion, they shouldn’t be publicly teaching EA”. (At risk of sounding strict, these are very friendly situations!)
I would love to see some training for uni organisers on how to field rebuttals. E.g., “Hm, I actually can’t answer that with the confidence I think it deserves. But I recommend you message X about it!” (Are there people who can just field questions? Ask a Forum librarian?) Or “Just because of time limitations, I really want to circle back to this later with you... can we chat after the session or over [messenger app]?”
3. On Cults
The cult thing is really problematic. Here are some known aspects of cults that we don’t technically fit, but that all EAs, especially organisers, could do more to lean away from:
-Zealots: Don’t be one.
-Separation from friends and loved ones: Happens accidentally due to value changes. Mentioning other people and commitments in your life other than EA might go a long way.
-The cult’s philosophy is the one great truth: Stress moral uncertainty and the different approaches to doing good within the movement. Discuss how EA has changed and how the philosophy doesn’t have any prescriptions written in stone except that the community welcomes people who try to do their best for the good of others using evidence and reason.
-One magical leader: No idolizing, and err toward being open if you disagree with an expert. Emphasize the decentralized origin of ideas in EA. Also, if you bring up one expert, it is good to bring up others: e.g., why just Peter Singer, Toby Ord, Will MacAskill, or Rob Wiblin? Surely you can find a second person to support their claim? Or just say “some well-respected figures think X because Y” and don’t name-drop anybody unless requested.
-Tithes and pushing self-denial or frugality to increase the ability to tithe more: Why is this even promoted among broke college students when the community has funds for a good few years? I’d approach it as something EA ideals have pointed to in the past, and something some people still do. Let them hear about GWWC if they ask. Move on quickly to talking about direct work. EA initially had bad publicity because of talks about money, and we can thank our lucky stars that it isn’t a moral imperative to “fundraise” anymore! It’s great that people have been and are still donating, and thanks to them, organizers are free to use so many other framings in outreach and pitch other things that will be less unpopular and more impactful. So please do!
-Promising a great afterlife or great eventual reward: Immortality via simulation, cryonics, and utopian simulation are probably things to steer a beginner-friendly discussion away from if it happens to go there.
Additionally, I’d love to see some training on how young EAs can talk to their families. I recently met a wannabe student organizer who told me how tenaciously he was talking to his (Mormon) parents about EA, and cult bells were ringing even in my head. I gave him some advice, but the odds are there are more prospective organizers out there doing that. As an ex-young-vegan, I get it. But EA really doesn’t need parents lobbying their child’s university that the EA student group is a cult and should be shut down. Nor do we need well-meaning parents posting on social media or sending emails warning their parent friends or religious leaders about EA student groups corrupting their kids.
4. CRM
CRM seems good, but it should be used transparently. Just ask people what opportunities they would like information about, what their favorite cause areas are, and anything else about them they’d like you to know. Say you will keep this information for now, but that it can be deleted any time on request. It is so you can send them things like job and fellowship opportunities, interesting events, and intellectual pieces they will really like. Be clear that you are not affiliated with any of the opportunities, but are just doing this as a helpful service to your members.
5. Maybe we can all can help
A good volunteer opportunity for EAs might be to reach out to your university organizers and try to mentor them a bit. Send them good pieces or teach them how to proactively use the Forum and subscribe to the community-building topic tag. Invite them to the Slack and Facebook groups and share newsletters with them that they might not know about. You could even show up to the first or last day of their fellowships if the student organizers think it would help. I am doing a bit of mentoring for University of Texas organizers, but this post makes me want to do more.
**Side note, I really don’t get the focus on student outreach in general. At least 4 of the 6 bottlenecks named seem better sourced from professionals and regional connections (management, ability to really figure out what matters most and set the right priorities, skills related to entrepreneurship / founding new organizations, and one-on-one social skills and emotional intelligence) than from universities. Plus young people are probably better at spreading cultural memes, so we might have a bigger reputational risk with them.
“Separation from friends and loved ones: Happens accidentally due to value changes.”
I hope by this you mean something like “People in general tend to feel a bit more distant from friends when they realise they have different values and EA values are no exception.” But if you’ve actually noticed much more substantial separation tending to happen, I personally think this is something we should push back against, even if it does happen accidentally. Not just for optics’ sake (“Mentioning other people and commitments in your life other than EA might go a long way”), but for not feeling socially/professionally/spiritually dependent on one community, for avoiding groupthink, for not feeling pressure to make sacrifices beyond your ‘stretch zone.’
Hi Ivy,
Just wanted to hop in re: the University Group Accelerator program. You are definitely hitting on some key points that we have been strategizing around for UGAP. I just want to clarify a few things:
* We see 25 hours as the minimum amount of time engaging with EA ideas before someone should help start a group. Oftentimes we think it should be more, but there have been cases of really great organizers springing up after just an intro fellowship. We have additional screening for UGAP groups beyond just meeting the prerequisites, which dives a bit more into the nuances you mentioned around what high-quality content is.
* UGAP has been very much in beta mode but we are hoping to share the training materials from the upcoming round. :) We would be excited to have people red-team these once they are presentable.
Thanks for responding! I’m actually super excited about UGAP and have already recommended the program to student organizers now that your applications are open (applications are open, people!). I do note that the 25 hour time commitment is for “at least one organizer”, but I also think mentoring will go a long way to make those 25+/- hours count for more. That’s great that you do interviews to determine quality and you clarify what quality content is.
Excited to see what comes of it :)
Re: “there have been cases of really great organizers springing up after just an intro fellowship.”
I definitely believe this can happen and am glad you allow for that. What makes someone seem really great — epistemics, alignment/buy-in, skill in a relevant area of study, __?
There are multiple reasons for the focus on student outreach:
Students are early on in their careers. You are much more likely to be able to affect their trajectory because a) they are often still deciding (and may even seek out your advice!) b) they lack sunk cost c) they have access to low-cost opportunities like internships to try out various paths.
Students have large amounts of free time and the enthusiasm/energy of youth. If an aspect of EA sounds interesting to them, they are more likely to read about it. They have more time to volunteer and more time to invest in skilling up.
Top schools provide an opportunity to connect with people at a certain level of talent. These people are much harder to access later in their careers, both because they are busier, but also because they are distributed at many different companies instead of all concentrated on a few campuses. Beyond this, attending events is so much easier as a student and schools have, for instance, O-Days where societies can recruit members.
Besides these theoretical reasons, I expect CEA is basing this on experience and looking at the highest performers in EA and how they became involved in EA. See, for example, this post which notes:
Obviously, that’s cherry-picked, but it’s still illustrative of how impactful uni group organising can be.
I am aware of the reasons, and I still think it has been focused on to the neglect of other things. Perhaps I should have said extreme focus instead. Maybe that is budget consciousness (uni groups have in the past been run by free and cheap volunteers), but it doesn’t seem like that should have been a strict consideration for a couple of years now. I’m not saying student groups aren’t good, but that given the bottlenecks and given CEA’s limited bandwidth, I don’t think it warrants the extreme focus and bullishness I see from many these days, to, I can only assume, the detriment of other programs and other experimentation. Almost all of those students will still be recommended to enter regular careers and gain career capital before they can be competitive for doing direct work, and it is unclear how many students from these groups are even going for direct work on longtermist areas. I think perspectives here might depend on AGI timelines.
Let me also clarify that I am talking about uni groups, as opposed to targeted skilling-up programs hosted at universities. I’m also guessing that the 2015 Stanford group was a lot different from the uni groups of today. Eight-week intro fellowships didn’t exist then.
So from the perspective of the recruiting party these reasons make sense. From the perspective of a critical outsider, these very same reasons can look bad (and are genuine reasons to mistrust the group that is recruiting):
- easier to manipulate their trajectory
- easier to exploit their labour
- free selection, build on top of/continue rich get richer effects of ‘talented’ people
- let’s apply a supervised learning approach to high impact people acquisition, the training data biases won’t affect it
Well, haters are gonna hate. Maybe that’s too blasé, but as long as we are talking about university groups rather than high schools, the PR risks don’t feel too substantial.
A small thing, but citing a particular person seems less culty to me than saying “some well-respected figures think X because Y”. Having a community orthodoxy seems like worse optics than valuing the opinions of specific named people.
Tbh I’ve had success with this approach. Usually, someone will say “like who?” and then I get to rattle off some names with a clause-length bio without making their eyes glaze over, because they proactively requested the information. Other times they won’t ask because they are more interested in the overall point than in who thinks it anyway, and they probably already trust me by that point. Sometimes I’d actually have to google anyway: “Well, I know one was the head of this org and one was the author of this book, let me look those up,” and then people are like “whatever, whatever, I believe you.” It is the ideas that matter anyway.
In general, I think it is good to talk casually, and this kind of wording is very natural for me, with the benefit that I don’t screw up my train of thought trying to remember names. If it isn’t natural for you (and I guess for many EAs it won’t be, now that you mention it), don’t do it.
I think she is suggesting that only reading up about one person’s thoughts and treating it like gospel is cult-like and bad, then sharing that singular view gives off cult-like impressions (understandably). Rather, being more open to learning many different people’s views, forming your own nuanced opinion, and then sharing that is far more valuable both intrinsically and extrinsically!
I think it’s pretty clear you shouldn’t be saying “some well-respected figures think X because Y” regardless; that’s bad epistemics 101, because it’s vague and not referenceable.
The focus on student groups is also inherently redflaggy for some people, as it can be viewed as looking for people who have less scepticism and experience.
I’ve been speaking to a number of people in university organizing groups who have been aware of these issues, and almost across the board the major issue they feel is that it seems too conflict-generating/bad/guilt-inducing to essentially tell their friends and peers in their or other universities something like “Hey, I think the thing you’re doing is actually causing a lot of harm, actually.”
I would be very in favor of helping find ways to facilitate better communication between these groups that specifically targets ways they can improve in non-blaming, pro-social and supportive ways.
I wonder if the suggestion here to replace some student reading groups with working groups might go some way to demonstrating that EA is a question.
I don’t even think the main aim should be to produce novel work (as suggested in that post); I’m just thinking about having students practice using the relevant tools/resources to form their own conclusions. You could mentor individuals through their own minimal-trust investigations. Or run fact-checking groups that check both EA and non-EA content (which hopefully shows that EA content compares pretty well but isn’t perfect...and if it doesn’t compare pretty well, that’s very useful to know!)
This feels much closer to how I experienced EA student groups 5-7yrs ago—e.g. Tom and Jacob did exactly this with the Oxford Prioritisation Project, and wrote up a very detailed evaluation of it.
Aye and EA London did a smaller version of something in this space focused on equality and justice.
My first thought on reading this suggestion for working groups was “That’s a great idea, I’d really support someone trying to set that up!”
My second thought was “I would absolutely not have wanted to do that as a student. Where would I even begin?”
My third thought was that even if you did organise a group of people to try implementing the frameworks of EA to build some recommendations from scratch, this will never compare to the research done by long-standing organisations that dedicate many experienced people’s working lives to finding the answers. The conclusion of the project would surely be a sort of verbal participation medal, but you’re best off looking at GiveWell’s charities anyway.
Maybe I’m being overly cynical here. It seems a good way to engage people who could later develop into strong priorities/charity evaluation researchers. I suspect it’s best that any such initiative be administered by people already working to a high standard in those fields for that benefit to be properly reaped, however.
Agreed, hence “I don’t even think the main aim should be to produce novel work”. Imagine something between a Giving Game and producing GiveWell-standard work (much closer to the Giving Game end). Like the Model United Nations idea—it’s just practice.
I’ve been very keen to run “deep dives” where we do independent research on some topic, with the aim that the group as a whole ends up with significantly more expertise than at the start.
I’ve proposed doing this with my group, but people are disappointingly unreceptive to it, mainly because of the time commitment and “boringness”.
Maybe you want to select for the kind of people who don’t find it too boring! My guess, though, is that the project idea as currently stated is actually a bit too boring for even most of the people that you’d be trying to reach. And I guess groups aren’t keen to throw money at trying to make it more fun/prestigious in the current climate… I’ve updated away from thinking this is a good idea a little bit, but would still be keen to see several groups try it.
No no, I still believe it’s a great idea. It just needs people to want to do it, and I was just sharing my observation that there doesn’t seem to be that many people who want it enough to offset other things in their life (everyone is always busy).
Your comment about “selecting for people who don’t find it boring” is a good re-framing, I like it.
Oh yes I know—with my reply I was (confusingly) addressing the unreceptive people more than I was addressing you. I’m glad that you’re keen :-)
Strong +1. This feels much more like the correct use of student groups to me.
This is a great post! Upvoted. I appreciate the exceptionally clear writing and the wealth of examples, even if I’m about 50/50 on agreeing with your specific points.
I haven’t been involved in university community building for a long time, and don’t have enough data on current strategies to respond comprehensively. Instead, a few scattered thoughts:
I don’t like using “EA” as a noun. But if we do want to refer to some people as “EAs”, I think your friend has the most important characteristics described by that term.
Using EA’s core ideas as a factor in big decisions + caring a lot about doing good + strong practical bent + working on promising career path = yes, you are someone who practices effective altruism (which seems, to me, like the best definition of “an EA”). You don’t have to attend the conferences or wear the t-shirts to qualify.
Not sure about current doctrine, but my impression is that “HEA” isn’t meant to be a binary category. Based on your statement:
I’d be surprised if even the most literal interpretation of any community-building advice would have an organizer favoring “one person in policy” over “one hundred policy people being interested in EA” (feels an order of magnitude off, maybe?).
Bolding for emphasis: People often overestimate how important “full-time EA people” are to the movement, relative to people who “have seriously considered the ideas and are in touch”.
That’s largely because people who discuss EA online are frequently in the first group. But when it comes to impactful projects, a massive amount of work is done by people who are very focused on their own work and less interested in EA qua EA.
When I see my contacts excitedly discussing a project, it often looks like “this person who was briefly involved with group X/is friends with person Y is now pursuing project Z, and we think EA played a role”. The person in question will often have zero connection with “the EA community” at large, no Forum account, etc.
You see less of this on the Forum because “this person got a job/grant” and “this person has started a new project” aren’t exciting posts unless the person in question writes them. And the non-Forum-y people don’t write those posts!
I got this reaction a lot when I was starting up Yale EA in 2014, despite coming up with all my messaging alone and having no connection to the wider EA community. Requests to donate large amounts of money are suspicious!
I’d expect to see less of this reaction now that donating and pledge-taking get less emphasis than in 2014, especially in college groups. But I think it’s hard to avoid while also trying to convey that the things we care about are really important.
(Doesn’t mean we shouldn’t try, but I wouldn’t see the “donations are a scam” perspective as strong evidence that organizers are making the wrong choices.)
Almost everyone I’ve interacted with in EA leadership/CB is obsessed with good epistemics — they value them highly when recruiting/evaluating people, much more than with any other personal trait (with rare exceptions, e.g. strong technical skills in roles where those are crucial).*
My impression is that they’d be happy to trade a bunch of alignment for epistemic skill/virtue at the margin for most people, as long as alignment didn’t dip to the point where they had no interest in working on a priority problem.
This doesn’t mean that current CB strategy is necessarily encouraging good epistemics. (I’m sure it varies dramatically between and within groups.) It’s possible for a group’s strategy not to achieve the ends they want — and it’s easier to Goodhart on alignment than epistemics, because the former is easier to measure.
But I am confident that leaders’ true desire is “find people who have great epistemics [and are somewhat aligned]”, not “find people who are extremely aligned [and have okay epistemics]”.
*To clarify my perspective: I’ve seen discussion of 100+ candidates for jobs/funding in EA. Alignment comes up often, but mostly as a checkbox/afterthought, while “how the person thinks” is the dominant focus most of the time. Many terms are used — clear thinking, independent thinking, nuanced thinking — but they point to the same cluster of traits.
This is very true!
One good way to hear a wider range of feedback is to have friends and activities totally separate from your EA work who can give you a more “normal” perspective on these things. This was automatic for me in college; our EA group was tiny and there wasn’t much to do, so we all had lots of other stuff going on, and I’d been making friends for years before I discovered EA.
I gather that EA groups are now, in some cases, more like sports teams or music groups — things that can easily consume most of someone’s non-class hours and leave them in a place where most of their friends are in the same club. It’s good to have a close-knit group of altruistic friends, but spending all of your time around other people in EA will limit your perspective; guard against this!
(Also, having hobbies not related to your life’s central purpose seems healthy for a lot of reasons.)
Also very true!
Flagging this because it is very hard to account for properly, I’ve had to adjust my expectation of how hard-to-criticize I am several times (especially after I started getting jobs within EA).
Minor elaboration on your last point: a piece of advice I got from someone who did psychological research on how to solicit criticism was to try to brainstorm what someone’s most likely criticism of you would be, and then offer that up when requesting criticism, as this is a credible indication that you’re open to it. Examples:
“Hey, do you have any critical feedback on the last discussion I ran? I talked a lot about AI stuff, but I know that can be kind of alienating for people who have more interest in political action than technology development… Does that seem right? Is there other stuff I’m missing?”
“Hey, I’m looking for criticism on my leadership of this group. One thing I was worried about is that I make time for 1:1s with new members, but not so much with people that have been in the group for more than one year...”
“Did you think there was anything off about our booth last week? I was noticing we were the only group handing out free books, maybe that looked weird. Did you notice anything else?”
Appreciate your comments, Aaron.
You say: But I am confident that leaders’ true desire is “find people who have great epistemics [and are somewhat aligned]”, not “find people who are extremely aligned [and have okay epistemics]”.
I think that’s true for a lot of hires. But does that hold equally true when you think of hiring community builders specifically?
In my experience (5 ish people), leaders’ epistemic criteria seem less stringent for community building. Familiarity with EA, friendliness, and productivity seemed more salient.
This is a tricky question to answer, and there’s some validity to your perspective here.
I was speaking too broadly when I said there were “rare exceptions” when epistemics weren’t the top consideration.
Imagine three people applying to jobs:
Alice: 3/5 friendliness, 3/5 productivity, 5/5 epistemics
Bob: 5/5 friendliness, 3/5 productivity, 3/5 epistemics
Carol: 3/5 friendliness, 5/5 productivity, 3/5 epistemics
I could imagine Bob beating Alice for a “build a new group” role (though I think many CB people would prefer Alice), because friendliness is so crucial.
I could imagine Carol beating Alice for an ops role.
But if I were applying to a wide range of positions in EA and had to pick one trait to max out on my character sheet, I’d choose “epistemics” if my goal were to stand out in a bunch of different interview processes and end up with at least one job.
One complicating factor is that there are only a few plausible candidates (sometimes only one) for a given group leadership position. Maybe the people most likely to actually want those roles are the ones who are really sociable and gung-ho about EA, while the people who aren’t as sociable (but have great epistemics) go into other positions. This state of affairs allows for “EA leaders love epistemics” and “group leaders stand out for other traits” at the same time.
Finally, you mentioned “familiarity” as a separate trait from epistemics, but I see them as conceptually similar when it comes to thinking about group leaders.
Common questions I see about group leaders include “could this person explain these topics in a nuanced way?” and “could this person successfully lead a deep, thoughtful discussion on these topics?” These and other similar questions involve familiarity, but also the ability to look at something from multiple angles, engage seriously with questions (rather than just reciting a canned answer), and do other “good epistemics” things.
Fwiw, my intuition is that EA hasn’t historically been selecting against, e.g., good epistemic traits, since I think that the current community has quite good epistemics by the standards of the world at large (including the demographics EA draws on). Of course, current EA community-building strategies may have caused that to change, but, fwiw, I doubt it.
I also think that highly engaged EAs may generally be substantially more valuable, meaning that focusing on that makes sense, but would be interested in empirical analyses from community-builders.
I think it could be the case that EA itself selects strongly for good epistemics (people who are going to be interested in effective altruism have much higher epistemic standards than the world at large, even matched for demographics), and that this explains most of the gap you observe, but also that some actions/policies by EAs still select against good epistemic traits (albeit in a smaller way).
I think these latter selection effects, to the extent they occur at all, may happen despite (or, in some cases, because of) EA’s strong interest in good epistemics. E.g., EAs care about good epistemics, but the criteria they use to select for them are, in practice, whether the person expresses positions/arguments they believe are good ones; this functionally selects more for deference than for good epistemics.
I think it’s simultaneously true that highly engaged EAs are much more valuable, and that community builders shouldn’t focus primarily on maximizing the number of HEAs. This is due to impact having significant dependence on talent and other factors orthogonal to engagement.
This reminds me of Bible Study groups (some of which I led, badly), where discussion was encouraged but never really approved of. I have empathy for those leading these.
As a leader, it is genuinely hard to balance:
allowing discussion
staying on topic
pointing out the best answers
allowing a safe space for disagreement
I agree with the author on the criticisms, but I have led a lot of group discussions and I do find it really hard.
My suggestion here is to have 2 people leading the group, one who will take the role of moderator—to ask questions and move the group on. And one who will argue the EA point of view, and at times be shut down by the moderator.
There’s a joke that whatever the question is in Bible Study, the correct answer is always ‘God’, ‘Jesus’, or ‘The Bible’. I think it would be bad if the EA equivalent to that became ‘AI’, ‘Existential risk’ and ‘Randomised controlled trials’ .
On the other hand, discussion relies on people having a shared pool of information, and I think it’s very easy to overestimate how much common information people share. I’ve found in group discussions it’s common that someone who’s not a regular to the discussions will bring a whole set of talking points, articles, authors, ideas etc. that I had no idea even existed till then. Which is great, except I don’t know what to say in response except ‘uh, what was the name of that? I’ll have to read into it’.
Yeah, I recall my university organizing days and the awkwardness/difficulty of trying to balance “tell me about the careers you are interested in and why” and “here are the careers that seem highly impactful according to research/analysis.”
I frequently thought things like “I’d like people to have a way to share their perspective without feeling obligated to defend it, but I also don’t want to blanket-validate everyone’s perspectives by simply not being critical.”
The comment below is made in a personal capacity, and is speaking about a specific part of the post, without intending to take a view on the broader picture (though I might make a broader comment later if I have time).
Thanks for writing this. I particularly appreciated this example:
I’m pretty worried about this. I got the impression from the rest of your post that you suspect some of the big-picture problem is community builders focusing too much on what will work to get people into AI safety, but I think this particular failure mode is also a huge issue for people with that aim. The sorts of people who will hear high-level/introductory arguments and immediately be able to come up with sensible responses seem like exactly the sorts of people who have high potential to make progress on alignment. I can’t imagine many more negative signals for bright, curious people than someone who’s meant to be introducing an idea not being able to adequately respond* to an objection they just thought of.
Though, to be fair, ‘hang on a sec, let me just check what my script says about that objection’ might actually be worse...
*To be clear, ‘adequately responding’ doesn’t necessarily mean ‘being so much of an expert that you can come up with a perfect response on the spot’. It’s fine to not know stuff, and it’s vital to be able to admit when you don’t. Signposting to a previous place where the question has been discussed, or knowing that it will be covered later (if, e.g., this comes up in a fellowship), both seem useful. It seems important to know enough about common questions, objections, and alternative viewpoints to be able to do this the majority of the time. If it’s genuinely something that the person running the session has never heard, this is exactly the time to demonstrate good epistemics: being willing to seriously engage, ask follow-up questions, and try to double-crux.
+1 to the concern on epistemics, that is one of my bigger concerns also.
Really excited for the new syllabus! Please do share it when it’s ready :)
Very interesting. I haven’t come into contact with any student groups, so can’t comment on that. But here’s my experience of what’s worked well and less well, coming in as a longtime EA-ish giver in my late 30s looking for a more effective career:
Good
(Free) books: I love books—articles and TED talks are fine for getting a quick and simple understanding of something, but nothing beats the full understanding from a good book. And some of the key ones are being given away free! Picking out a few, the Alignment Problem, The Precipice and Scout Mindset give a grounding in AI alignment, longtermism/existential risk and rational thinking techniques, and once you have a handful under your belt you’re in a solid place to understand and contribute to some discussions. They’re good writers too; it’s not just information transfer. The approach of ‘here’s a free book, go away and read it, here’s some resources if you want to research further’ sounds like the polar opposite of what’s described above. It worked well for me. Maybe a proper ‘EA book starter list’ would help it work even better (there’s a germ of this lurking halfway down the page here, but surely this could be standalone and more loved...)
Introductions culture: People seem happy to give their time up to talk to you after exchanging a couple of messages. After meeting people they’re eager to introduce you to others you might be a good ‘match’ with or at least give leads. Apart from its obvious benefits this is really good for keeping spirits up early on when it might be a bit daunting otherwise.
80k careers guides: Pretty obvious but very well-written and a good starting point.
Jobs boards (e.g. 80k, Work on Climate, Facebook/LinkedIn groups): Well-curated, they give a clear view of what’s available in the sector, and particular roles are generally well-written. On boards where people post their own jobs, they almost always follow community norms. Not entirely free from the usual problems (hype, jobs without posted salaries) but better than most. I’ve seen some jobs that are what I want but in other countries, which makes me hopeful I’m looking in the right place, especially if I can also start meeting some more people. Talking of which...
This forum: Smart discussion, some key people on here writing and listening to feedback, seemed welcoming and receptive when I just rocked up and started writing some comments.
Less good
Occasionally, apparent coldness to immediate suffering: I’ve only seen this a bit, but even one example could be enough to put someone off for good. I can see what motivates it, but if a person says ‘I think x is one of the most pressing current problems’, and the response is what seems like a dismissive ‘well, x isn’t a genuinely existential risk so it’s not a priority’, it can come across as a lack of empathy or, at worst, humanity. It’s not the argument itself, as I’ve no issue with ranking charities or interventions and producing recommendations, but more the apparent absolutism and lack of compassion involved (even if, ironically, it could be produced by compassion for an imagined future greater good).
Processes that don’t seem fit for the scale of EA: I’ve bigged up 80k above so I’ll use them as an example here. Ordered a free book, it arrived, got an email later saying ‘ah looks like we have distribution problems, here’s a digital copy while you’re waiting’… then another one saying ‘oops forgot to attach it, here it is’. Signed up to 1:1 careers advice, heard nothing for 3 weeks, then ‘sorry, we can’t do you’, with no explanation. They did connect me with a local organiser, which was great, but didn’t pass on the responses I’d taken some time to think about, so we ended up covering some ground again.
Occasionally insular worldview: This comes from being concentrated in a small number of cities and often graduating from top universities. I linked this piece in another post, but it’s very good, so I’m linking it again.
Neutral but interesting
’Eccentric’ billionaires: Media seem to like this angle but it doesn’t really hold up in practice. The presence of the narrative did lead me to investigate the funding of EA in ways that I might not otherwise have done.
I’m still here, so clearly the good outweighs the rest!
I would really like to ban the term “rounding error”.
I haven’t come across this yet… is it what I think it is?
Yep.
It seems pretty easy to optimise for consequentialist impact and still be more virtuous and principled than most people.
Maybe EA can lead to bad moral licensing effects in some people.
I really like that piece that you linked to. Thanks for including it.
In case anyone isn’t aware of it, that’s very much the demographic that CEEALAR (aka the EA hotel) is trying to support!
I’m curious whether community size, engagement level, and competence might matter less than the general perception of EA among non-EAs.
Not just because low general positive perception of EA makes it harder to attract highly engaged, competent EAs. But also because general positive perception matters even if it never results in conversion. General positive perception increases our ability to cooperate with and influence non-EA individuals and institutions.
Suppose an aggressive community building tactic attracts one HEA, of average competence. In addition, it gives a number of people n a slightly negative view of EA—not a strongly felt opposition, just enough of a dislike that they mention it in conversations with other non-EAs sometimes. What n would we accept to make this community building tactic expected value neutral? (This piece seems to suggest that many current strategies fit this model.)
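To make the break-even question concrete, here is a minimal sketch using hypothetical placeholders that the comment above doesn’t specify: let $V$ be the expected value of the one additional HEA and $c$ the expected cost of one person coming away with a slightly negative view of EA. The tactic is expected-value neutral when

$$1 \cdot V - n \cdot c = 0 \quad \Longrightarrow \quad n^{*} = \frac{V}{c},$$

so it is net-negative in expectation whenever the number of mildly put-off people exceeds $V/c$. The difficulty, of course, is that $c$ is a diffuse reputational cost and far less legible than HEA counts, which is part of why it tends to get neglected.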
Thank you for the labor of writing this post, which was extremely helpful to me in clarifying my own thinking and concerns. I plan to share it widely.
“I think it would be tempting to assume that the best of these people will already have intuited the importance of scope sensitivity and existential risk, and that they’ll therefore know to give EA a chance, but that’s not how it works.” This made my heart sing. EA would be so much better if more people understood this.
I found this post really useful (and persuasive), thank you!
One thing I feel unconvinced about:
For what it’s worth, I’m not sure naturally curious/thoughtful/critical people are particularly more put off by someone trying to persuade them (well/by answering their objections/etc.) than by someone merely explaining an idea, especially if the idea is a normative thesis. It’s weird for someone to be like “just saying, the idea is that X could have horrific side effects and little upside because [argument]. Yes, I believe that’s right. No need to adopt any beliefs or change your actions though!” That just makes them seem like they don’t take their own beliefs seriously. I’d much rather have someone say “I want to persuade you that X is bad, because I think it’s important people know that so they can avoid X. OK, here goes: [argument].”
If that’s right, does it mean that maybe the issue is more “persuade better”? e.g. by actually having answers when people raise objections to the assumptions being made?
Seems like the issue here is more being unpersuasive, rather than being too zealous or not focused enough on explaining.
I agree with you. Yet I bristle when people who I don’t know well start putting forth arguments to me about what is good/bad for me, especially in a context where I wasn’t expecting it.
I’m much more accustomed to people thinking that moral relativism is polite, at least at first.
Moral relativism can be annoying, but putting forth strong moral positions at eg a fresher’s fair does feel like something that missionaries do.
I like this criticism, but I think there are two essentially disjoint parts here that are being criticized. The first is excess legibility, i.e., the issue of having explicit metrics and optimizing to the metrics at all. The second is that a few of the measurements that determine how many resources a group gets/how quickly it grows are correlated with things that are not inherently valuable at best and harmful at worst.
The first problem seems really hard to me: the legibility/autonomy trade-off is an age-old problem that happens in politics, business, and science, and seems to involve a genuine trade-off between organizational efficiency and the ability to capitalize on good but unorthodox ideas and individuals.
The second seems more accessible (though still hard), and reasonably separable from the first. Here I see a couple of things you flag (other than legibility/”corporateness” by itself) as parameters that positively contribute to growth but negatively contribute to the ability of EA to attract intellectually autonomous people. The first is “fire-and-brimstone” style arguments, where EA outreach tends to be all-or-nothing, “you either help save the sick children or you burn in Utilitarian Hell”, and the second is common-denominator level messaging that is optimized to build community (so things like slogans, manufactured community and sense of purpose; things that attract the people like Bob in your thought experiment), but not optimized to appeal to meta-level thinkers who understand the reasoning behind the slogans. Both are vaguely correlated with EA having commonalities with religious communities, and so I’m going to borrow the adjective “pious” to refer to ideas and individuals for which these factors are salient.
I like that you are pointing out that a lot of EA outreach is, in one way or another, an “appeal to piety”, and this is possibly bad. There might be a debate about whether this is actually bad and to what extent (e.g., the Catholic church is inefficient, but the sheer volume of charity it generates is nothing to sneer at), but I think I agree with the intuition that this is suboptimal, and that by Goodhart’s law, if pious people are more likely to react to outreach, eventually they will form a supermajority.
I don’t want to devalue the criticism that legibility is in itself a problem, and particularly ugh-y to certain types of people (e.g. to smart humanities majors). But I think that the problem of piety can be solved without giving up on legibility, and instead by using better metrics, that have more entanglement with the real world. This is something I believed before this post, so I might be shoe-horning it here: take this with a grain of salt.
But I want to point out that organizations that are constantly evaluated on some measurable parameter don’t necessarily tend to end up excessively pious. A sports team can’t survive by having the best team spirit; a software company will not see any profit if it only hires people who fervently believe in its advertising slogans. So maybe a solution to the problem of appeals to piety is to, as you say, reduce the importance of the metric of “HEA” generation in determining funding, clout, etc., but replace it with other hard-to-fake metrics that are less correlated with piety and more correlated with actually being effective at what you do.
I haven’t thought much about what the best metrics would be and am probably not qualified to make recommendations, but just for plausibility’s sake, here are a couple of examples of things that I think would be cool (if not necessarily realistic):
First, it would be neat (though potentially expensive) if there were a yearly competition between teams of EAs (maybe student groups, or maybe something on a larger level) to use a funding source to create an independent real-world project and have their impact in QALYs judged by an impartial third party.
Second, I think it would be nice to create “intramural” forms of existing competitions, such as the fiction contest, Scott Alexander’s book review contest, various super-forecasting contests, etc., and to grade university groups on success (relative to past results). If something like this is implemented, I’d also like to see the focus of things like the fiction competition move away from “good messaging” (which smacks of piety) and towards “good fiction that happens to have an EA component, if you look hard enough”.
I think that if the funding culture becomes more explicitly focused on concentration of talent and on real-world effects and less on sheer numbers or uncritical mission alignment, then outreach will follow suit and some of the issues that you address will be addressed.
Thanks for this post! I used to do some voluntary university community building, and some of your insights definitely ring true to me, particularly the Alice example—I’m worried that I might have been the sort of facilitator to not return to the assumptions in fellowships I’ve facilitated.
A small note:
This EA Leaders Forum was nearly 3 years ago, so talent gaps have possibly changed. There was a Meta Coordination Forum last year run by CEA, but I haven’t seen any similar write-ups. This doesn’t seem to be an important crux for most of your points, but I thought it would be worth mentioning.
You’d read the Sequences but you thought we were a cult? Inconceivable!
(/sarcasm)
Oddly, while I agree with much of this post (and strong upvoted), it reads to me as evidencing many of the problems it describes! Almost all of the elements that make EA seem culty seem to me to hail from the rationality side of the movement: Pascalian reasoning, in-group jargon, hero worship (or rather epistemic deferral to heroes and to holy texts), and eschatology (tithes being the one counterexample I can think of), all of which I see in the OP.
I don’t know what conclusion one is supposed to draw from this, but it disposes me both toward agreeing with your critique and toward greater scepticism that following your recommendations would do much to fix the problem.
I also don’t have any great answers, but I do strongly feel that one can be an extremely valuable EA without having heard of the Sequences. I understand the efficiency of jargon, but I think in 90% of EA conversations where I hear it used, the benefits of communicating more literally would have outweighed the efficiency loss, and that’s without even considering the improved social signal of eschewing jargon.
As a side note, I wondered while reading this if targeting people at university with such high priority could in itself be a core part of the problem, regardless of how it’s done. What other types of social group have relatively old leaders but primarily target under-20s? I think the answer is ‘plenty of fairly harmless ones’, such as sports enthusiasts, but if that was all I knew about a group it would certainly increase my prior that they were sinister. So maybe another approach worth looking into more is greater recruitment efforts towards older people. I doubt it’s as cost effective unless you think GAI is at most a decade or two away, but it might send an important signal that we’re not just about preying on teenagers.
The hero worship is, I think, especially concerning, and is a striking way in which implicit/“revealed” norms contradict explicit epistemic norms for some EAs.
Thanks for writing this—this resonates a lot with my experience, as I was also exposed to and very put off from EA in college! But have eventually, slowly, made my way back here :)
I want to add that many of the “disconcerting” tactics community builders use are pretty well-established among community organizers (and larger student groups, like Greek life). So my sense is that the key problem lies in EA using well-proven community building tactics, but implementing them poorly. Having a scripted 1:1, a CRM, intro talks; making leadership asks of younger and newer members; measuring success by gaining new members; and trying our best to connect someone’s interests to the values and goals of our community are all very standard practice in community organizing. (They’re also very sales-y tactics, which is probably why they feel off-putting and slimy. I think most policy and entrepreneur types would be aware of this as long as they had some experience in the field, but perhaps students might not be.)
I’m not sure what exactly EA is doing wrong, or where the line between “wholesome supportive community” and “creepy cult” is, and I’d love to think about this more. My intuition is that EA student groups are fighting an uphill battle; the EA movement is somewhat unique in that it (1) already attracts a certain non-normative niche group of people; (2) asks people to change their careers without any clear offering in return (other than the resources available to help you… change your career); (3) espouses unusual beliefs.
Most of the students I’ve talked to name the community as one of the main reasons they stay in EA, and I wonder if EA would be better off leaning into community-building messaging over cause-area and career messaging, at least on campus.
(Context: I’d consider myself new to the EA community, but I’ve been doing community building and community organizing for ~5 years, including, as a student, teaching student community organizing fellowships and running recruitment for a sorority. I’m also a recovering hyperrationalist of the sort that once would have found EA to be extremely appealing.)
Thanks so much for writing this. As someone interested in starting to do community building at a university, this was helpful to read, especially the Alice/Bob example and the concrete advice. I do really think that EA could stand to be less big on recruiting HEAs. I think there are tons of people who are interested in EA principles but aren’t about to make a career switch, and it’s important for those people to feel welcome and like they belong in the community.
I was going to write “I kind of wish this post (or a more concise version) were required reading for community builders,” and then I thought better of it and took actions about it—namely, sent the link as feedback to the EA Student Group Handbook and made an argument that they should incorporate something like this into their guide for student groups.
A bunch of disorganized thoughts related to this post:
Fast growth still does lots of good, especially if you have short AI timelines. If the current policy of growth brings lots of adverse selection, the optimal policy might change to double the number of top AI safety researchers every 18 months, rather than double the number of HEAs every 12 months.
I think more potential top people are put off by EA groups having little overlap with their other interests, than are suspicious of EA being manipulative. This can be mitigated by focusing more on the object level, like discussion of problems in alignment, altpro, policy, or whatever.
People are commonly made uncomfortable by community-builders visibly optimizing against them. But we have to optimize. I think the solution here is to create boundaries so you’re not optimizing against people. When talking about career changes, I think it’s good to help the person preserve optionality so they’re not stuck in an EA career path with little career capital elsewhere. I’ve also found it helpful to come at 1-1s with the frame “I’ll help you optimize for your values”.
The “Scaling makes them worse” section implies a tension between two causes of epistemic harm. Less variation in culture makes EA more insular, but more variation causes this selection effect where the faster-growing groups might have worse epistemics.
I think pushing people into donations / GWWC pledges by default is a pretty obvious mistake. Pledges can be harmful and have pretty limited impact anyway.
“I think the solution here is to create boundaries so you’re not optimizing against people.”
I prefer 80,000 Hours’ ‘plan changes’ metric to the ‘HEA’ one for this reason (if I’ve understood you correctly).
Huh, I’m not familiar with this, can you post a link to an example script or message me it?
I agree that reading a script verbatim is not great, and privately discussed info in a CRM seems like an invasion of privacy.
I’ve seen non-EA college groups do this kind of thing and it seems quite normal. Greek organizations track which people come to which pledge events, publications track whether students have hit their article quota to join staff, and so on.
Doesn’t seem like an invasion of privacy for an org’s leaders to have conversations like “this person needs to write one more article to join staff” or “this person was hanging out alone for most of the last event, we should try and help them feel more comfortable next time”.
I keep going back and forth on this.
My first reaction was “this is just basic best practice for any people-/relationship-focused role, obviously community builders should have CRMs”.
Then I realised none of the leaders of the student group I was most active in had CRMs (to my knowledge) and I would have been maybe a bit creeped out if they had, which updated me in the other direction.
Then I thought about it more and realised that group was very far in the direction of “friends with a common interest hang out”, and that for student groups that were less like that I’m still basically pro CRMs. This feels obviously true for “advocacy” groups (anything explicitly religious or political, but also e.g. environmentalist groups, sustainability groups, help-your-local-community groups, anything do-goody). But I think I’d be in favour of even relatively neutral groups (e.g. student science club, student orchestras, etc) doing this.
Given how hard it is to keep any student group alive across multiple generations of leadership, not having a CRM is starting to seem very foolhardy to me.
I do community building with a (non-student, non-religious, non-EA) group that talks a lot about pretty sensitive topics, and we explicitly ask for permission to record things in the CRM. We don’t ask “can we put you in our database?”; we phrase it as “hey, I’d love to connect you with XYZ folks in the chapter who have ABC in common with you, would you mind if I take some notes on what we talked about today, so I can share with them later?” But we take pretty seriously the importance of consent and privacy in the work that we’re doing.
Also, as someone who was in charge of recruitment at a sorority in college where ~half the student body was Greek-affiliated… yeah, community builders should have CRMs. We just don’t call them CRMs; we call them “Potential New Member Sheet” or something.
It does feel a bit slimy, but I think this is pretty normal, and if done well, not likely to put off the folks we’re worried about.
I get the impression many orgs set up to support EA groups have some version of this. Here are some I found on the internet:
Global Challenges Project has a “ready-to-go EA intro talk transcript, which you can use to run your own intro talk” here: https://handbook.globalchallengesproject.org/packaged-programs/intro-talks
EA Groups has “slides and a suggested script for an EA talk” here: https://resources.eagroups.org/events-program-ideas/single-day-events/introductory-presentations
To be fair, in both cases there is also some encouragement to adapt the talks, although I am not persuaded that this will actually happen much, and even when it does, it might still be obvious that you’re seeing a variant on a prepared script.
I see, I thought you were referring to reading a script about EA during a one-on-one conversation. I don’t see anything wrong with presenting a standardized talk, especially if you make it clear that EA is a global movement and not just a thing at your university. I would not be surprised if a local chapter of, say, Citizens’ Climate Lobby, used an introductory talk created by the national organization rather than the local chapter.
I also misunderstood the original post as more like a “sales script” and less about talks. I also am surprised that people find having scripts for intro talks to be creepy, but perhaps secular Western society is just extremely oversensitive here (which is a preference we should respect if it’s our target audience!)
It’s not just talks (as in presentations), it’s also small-group discussions.
My intuitive understanding of the Alice personality type (independent, skeptical, etc.) is that they are often very entrepreneurial (a skill EA desperately needs), but not usually “joiners”. I have no doubt that a lot could be improved about community building, but there may always be some tension there that is difficult to resolve.
It may be that the best we can hope for in a lot of those cases are people who understand EA ideas and use them to inform their work, but don’t consider themselves EAs. That seems fine to me. Like person 1 in your real life example seems like a big win, even if they don’t consider themselves EA. If the EA intro talk she attended helped get her on that track, then it “worked” for her in some sense.
I’m definitely going to change my attitude to community building, to the extent I am involved with it, as a result of reading this. Making sure that criticisms are addressed to the satisfaction of the critic seems hugely important and I don’t think I had grasped that before.
Thanks for posting this—it was an interesting and thoughtful read for me as a community builder.
This summarised some thoughts I’ve had on this topic previously, and the implications on a large scale are concerning at the very least. In my experience, EA’s growth over the past couple of years has meant bringing on a lot of people with specific technical expertise (or people who are seeking to gain this expertise), such as those working on AI safety/biorisk/etc., with a skillset that would broadly include mathematics, statistics, logical reasoning, and some level of technical expertise/knowledge of their field. Often (speaking anecdotally here) these would be the type of people who:
are really good at working on detailed problems with defined parameters (eg. software developers)
are very open to hearing things that challenge or further their existing knowledge, and will seek these things out
will be easily persuaded by good arguments (and probably unlikely to push back if they find the arguments mostly convincing)
These people are pretty easy for community builders to deal with because there is a clear, forged pathway defined in EA for these people. Community builders can say, “Go do a PhD in biorisk,” or “There’s a job open at DeepMind, you should apply for it,” and the person will probably go for it.
On the other hand, there are a whole range of people who don’t have the above traits, and instead have one (or more) of the following traits:
prefer broader, messier problems (eg. policy analysts) and are not great at working on detailed problems within defined parameters (or maybe less interested in these types of problems)
are somewhat open to hearing things that challenge or further their existing knowledge, but might not continue to engage if they initially find something off-putting
can be persuaded to accept new arguments, but are more likely to push back, hold onto scepticism for longer, and won’t accept something simply because it is the commonly held view, even if the arguments for it are generally good
These people are harder for community builders to deal with as there is not a clear forged pathway within EA, and they might also be less convinced by the pathways that do exist. (For example, maybe if someone has these traits a community builder might push them towards working in AI policy, but they might not be as convinced that working in AI policy is important, or that they personally can make a big difference in the field, and they won’t be as easily persuaded to apply for jobs in AI policy.) These people might also feel a bit lost when EAs try to push them towards high-impact work—they see the world in greyer terms, they carry more uncertainty, and they are more hesitant to go “all in” on a specified career path.
I think there is a great deal of value that can be derived if EA can find ways to engage with people with these traits, and I also think people with at least one of these traits are probably more likely to fall into the categories that you highlighted in your post – government/policy experts, managers, cause prioritizers (can’t think of a better title here), entrepreneurs, and people with high social/emotional skills. These are people who like big, messy, broad problems and who may generally take more time to accept new ideas and arguments.
In my community-building role, I want to attract and keep more of these people! I don’t have good answers for how to do this (yet), but I think being aware of the issue and trying to figure out some possible ways in which more people with these skills can be brought on board (as well as trying to figure out why EA might be off-putting to some of these people) is a great start.
“If it’s also sufficiently likely that some people could figure this out and put us on a better path, then it seems really bad that we might be putting off those very people.”
Here! When I was twelve, I spent four years finding the best way to benefit others, then I developed my skill-set to pursue a career in it… 26 years ago. So, I might qualify as one of those motivated altruists who is turned-off by the response they’ve gotten from EA. I think I’m one of the people you want to listen to carefully:
I don’t need funding—I already devote 100% of my time as I choose, and I’m glad to give it all to each cause. I am looking to have the 1-to-2 hour long, 2-to-5 person thoughtful conversation, on literally dozens of existing and EA-adjacent topics. I am not looking for a 30min. networking/elevator-pitch at a conference, because I’m not trying to get hired as a PA. I am not looking for the meandering, noisy, distracted banter at a brief social event. This forum, unfortunately, has presented me with consistent misrepresentations and fallacies, which the commentators refuse to address when I point them out. Slack is similarly incapable of the deeper, thoughtful conversations, with members and outsiders, that fosters insight and understanding.
There are numerous ideas, opportunities, methods, that are going un-noticed because of the barriers placed in front of thoughtful dialogue. It is a burden that should rest upon those EAs who are dismissive of deeper conversation, instead of being the “price I have to pay, to prove myself, before anyone will listen”, as I was most recently told on this Forum.
“There are numerous ideas, opportunities, methods, that are going un-noticed because of the barriers placed in front of thoughtful dialogue. It is a burden that should rest upon those EAs who are dismissive of deeper conversation, instead of being the “price I have to pay, to prove myself, before anyone will listen”, as I was most recently told on this Forum.”
Your last paragraph is exactly what I’m worried about when considering engaging EA and exactly why I bring up “signalling” and “posturing” in my own post. I worry about the maturity of the community, and the seriousness EA has about actually getting things done as opposed to being self-congratulatory on their enlightened approach. I think most seasoned professionals don’t have the patience for this kind of dynamic. However, I’ve yet to determine for myself the extent that this dynamic actually exists in the community.
“Remember that the marginal value of another HEA is way lower than the marginal value of an actual legitimate criticism of EA nobody else has considered yet.”
Thank you for saying it!
I sympathize with this, as it does seem like there aren’t currently a ton of opportunities like this.
This is a pretty strong statement that seems like it would benefit from some examples to support it—though maybe it is beside the point as the forum probably isn’t going to be the “1-to-2 hour long, 2-to-5 person thoughtful conversation” you are looking for anyway.
Thank you for writing this post! I recently had a discussion with some EA intro fellowship participants who felt that EA is very demanding, with expectations about changing your career etc., and that it gives a very cultish or religious impression. Some said they are interested in EA and in implementing some of its tools and mindsets in their lives, but that’s it. I think we should embrace that too.
Thanks so much for this extremely important and well-written post, Theo! I really appreciate it.
My main takeaway from this post (among many takeaways!) is that EA outreach and movement-building could be significantly better. I’m not sure yet on the clear next steps, but perhaps outreach could be even more individualized and epistemically humble.
One devil’s-advocate point on your point that “while it may be true that there are certain characteristics which predict that people are more likely to become HEAs, it does not follow that a larger EA community made up of such people would automatically be better than this one.” Despite Goodhart’s Law, I think that there is some definition of HEA such that maximizing the number of HEAs is the best practical strategy for cooperative movement-building. Having a lot of dedicated people in a cooperative group is very important, perhaps the most important factor in determining the success of the group. More complicated goals/guidelines for movement-builders are harder to use, both for individuals and for group coordination.
Can confirm that other groups/subcultures have begun to see EA as a deceitful cult because of stuff like this
I’ve seen people make these complaints about EA since it first came to exist.
As EA becomes bigger and better-known, I expect to see a higher volume of complaints even if the average person’s impression remains the same/gets a bit better (though I’m not confident that’s the case either).
This includes groups with no prior EA contact learning about it and deciding they don’t like it — but I think they’d have had the same reaction at any point in EA’s history.
Are there notable people or groups whose liking/trust of EA has, in your view, gone down over time?
Which stuff in particular?
More detail, please.
Chapter 7 in this book had a number of good insights on encouraging dissent from subordinates, in the context of disaster prevention.
Great post. I appreciate the framing around the real gaps in human capital. One additional concern I have is that aesthetics might play a counterproductive role in community building. For example, if EA aesthetics are most welcoming to people who are argumentative, perhaps even disagreeable, then the skill set of “one-on-one social skills and emotional intelligence” could be selected out (relatively speaking).
As a community builder, I’ve lately thought about how much you can or should push and support other volunteers and new participants to engage more with EA. That could include offering 1-1 calls, sending private and group messages about specific opportunities, and asking for help in organizing events, among other things. For context, this is mostly a reflection on what I think we (the other organizers and I) should maybe do in EA Finland.
Arguments for more pushing:
I obviously believe what we’re doing as a community is important and want more people to engage more in EA. My group is competing for the participants’ time with other student groups and free-time activities. If we put effort into it, I think we could ask for more of the community members’ time, by creating more pleasant activities and clear projects and by actively encouraging people to join conferences, programmes and more. I can come up with many less impact-driven student or hobby groups that are successful at that. If you create motivation and momentum together and get people to prioritize EA activities, I’m pretty sure they will find the needed time. I might sometimes err on the side of being “too respectful” of people’s time, when people might actually be happy to help out, want to be reminded of an upcoming event, or need to be encouraged to pursue something difficult.
Against pushing people:
I want people to feel they can engage with EA ideas on their own terms without feeling they are more or less valued based on their commitment. Some people are also genuinely busy and too kind to say no to volunteering even when they should. I think the post also had many great examples of failed attempts to engage people.
I’m eager to get results, and HEA is a relatively easy metric, but it should not come at the expense of participants’ well-being or override their interests. When you’re in the mode of growing your group, you might forget to treat people as individuals and to adapt to the circumstances of the discussion.
Some of these problems were discussed in part 4 of the Hear This Idea podcast episode with Anders Sandberg. As far as I remember, he claimed that the growth of EA may slow down because the utilitarian framework may put off people with different ethical fundamentals.
https://hearthisidea.com/episodes/anders
Hi! I personally am interested in EA from the standpoint of government policy as well as social and emotional skills. If anyone has any suggestions on how I can get more involved let me know.
What does “it rubbed off on me” mean here? I’m puzzling over this passage, and I keep thinking of the common usage in which “an idea rubs off on one” means that one adopts that idea. Do you use “it rubbed off on me” to mean that you lost agreement with “it”? What is “it”?
Side note: morality aside, in Europe this is borderline illegal, so seems like a very bad idea.
Can you clarify why you think it’s “borderline illegal”? I assume you are referring to GDPR, but I’m not aware of any reason why the normal “legitimate interest” legal basis wouldn’t apply to group organizers.
Maybe I’m just wrong. I only have a lay understanding of GDPR, but my impression was that keeping any data that people had shared with you without their knowledge was getting into sketchy territory.
The increasing focus on Longtermism and X-risk has made us look cultish and unrelatable.
It was much harder for people to criticise EA as cultish when we were mainly about keeping poor people from starving or dying of preventable disease, because everyone can see immediately that those are worthy goals. X-risk and Longtermism don’t make the same intuitive sense to people, so people dismiss the movement as weird and wrong.
We should lean back towards focusing on global development
I agree with paragraphs 1 and 2 and disagree with paragraph 3 :)
That is: I agree longtermism and x-risk are much more difficult to introduce to the general population. They’re substantially farther from the status quo and have weirder and more counterintuitive implications.
However, we don’t choose what to talk about by how palatable it is. We must be guided by what’s true, and what’s most important. Unfortunately, we live in a world where what’s palatable and what’s true need not align.
To be clear, if you think global development is more important than x-risk, it makes sense to suggest that we should focus that way instead. But if you think x-risk is more important, the fact that global development is less “weird” is not enough reason to lean back that way.
I suspect that how weird and cultish X-risk-focused work looks to the average person varies within that domain. I think both A.I. risk stuff and a generic “reduce extinction risk” framing will look more “religious” to the average person than “we are worried about pandemics and nuclear wars.”
I think you make some excellent points. Thanks for writing this.
Great post! Have any specific changes come about as a result of writing this? I’m always curious to better understand the feedback loop between dialogue on the forum and actions in the movement.
I particularly like the points about how EA can be too totalizing, and I think the movement could benefit from a more pragmatic streak, both with more practically minded, professionally experienced people and also, honestly, with more engagement with the pragmatist school of philosophical thought. (Everything seems to be pretty much implicitly consequentialist all the time around here.) I also wonder why there can’t be explicit space for loose and half-joined EAs: “she got the sense that EA was a bit totalising, like she couldn’t really half-join”
Here’s a bit I wrote on the need for a more open and ecumenical EA movement: https://forum.effectivealtruism.org/posts/NR2Y2B8Y4Wxn8pAS8/towards-a-more-ecumenical-ea-movement
I know of at least one recruiting effort that internally cited this post to describe a thing they could accidentally do and should work hard to avoid. I don’t know what their actual counterfactual was.
I have been in two groups/clubs before. One was a student group, and I was only in a few short meetings. One was a book club. I also only went to a few meetings of the book club. On top of that, I socialize with virtually no one.
I have envisioned how I would facilitate a student EA group. Of course, because of the power of situations to change individual behavior, how I would actually come across and run things might be different. I thought I would start off with a flyer, a short advertisement with a promise of free pizza. The advertisement might be something like, “Come join the EA group, where we will talk about a range of topics, from global poverty to the world being taken over by AI.” Obviously, I might need more means of outreach. I didn’t explicitly lay out what that would be in my vision, since I thought it might be better to let the process of coalition-building flow naturally, and because I wasn’t sure what logistical challenges I would run into.

In my vision, the group wouldn’t be bound by strict rules, but it would be productive. I thought that the sessions could have objectives and wouldn’t just be me talking by myself, but would involve everyone actively participating, perhaps in a back-and-forth manner and maybe with people breaking off into teams (maybe some teams assuming the role of devil’s advocates). I would want it to be easygoing and a hot spot for creativity. Objectives would have been things like debating EA ideas and coming up with causes to prioritize. It would be easygoing partly for its own sake, and partly because students would have other obligations. They could caveat assignments/arguments with notes on things they didn’t work on for whatever reason; then, if someone else had time and wanted to, they could pick up work on that thing. I foresaw the group sessions being audio recorded. There could be a group member with the role of recording the sessions. There could be other roles too, like a logistics/supplies role, an external relations role, and other roles that could help the group effectively achieve various things. Maybe I or someone else would do a presentation/speech sometimes. I figured every week there could be food and snacks.
I guess the first meeting might be mainly me giving an introductory speech. The introduction doesn’t have to be dogmatic, though. I can foresee someone asking a question that turns into a group discussion which interrupts, say, 60% of everything else I had planned for the introduction. I think having the introduction cut off like that might be fine, since all the various EA topics could be addressed eventually in a roundabout way from session to session. In that event, the introductory meeting would just go deeper than planned on a single point or issue.
Is there a place you should go if you meet one of those particular talent gaps?
I don’t think we have a single “landing page” for all the needs of the community, but I’d recommend applying for relevant jobs or getting career advice or going to an EA Global conference, or figuring out what local community groups are nearby you and asking them for advice.
I feel like this post makes concrete some of the tensions I was more abstractly pointing at in A Keynesian/Hayekian model of community building.