I agree, and I think this is an argument for investing in cause-specific groups rather than generalized community building.
Re: “there have been cases of really great organizers springing up after just an intro fellowship.”
I definitely believe this can happen and am glad you allow for that. What makes someone seem really great — epistemics, alignment/buy-in, skill in a relevant area of study, __?
Strong +1. This feels much more like the correct use of student groups to me.
I agree with you, and I think this somewhat supports the OP's concern.
Are most uni groups capable of producing or critiquing empirical work about their group, or about EA or about their cause areas of choice? Are they incentivized to do so at all?
Sometimes yes, but mostly no.
Appreciate your comments, Aaron.
You say: But I am confident that leaders’ true desire is “find people who have great epistemics [and are somewhat aligned]”, not “find people who are extremely aligned [and have okay epistemics]”.
I think that’s true for a lot of hires. But does that hold equally true when you think of hiring community builders specifically?
In my experience (roughly 5 people), leaders' epistemic criteria seem less stringent for community building. Familiarity with EA, friendliness, and productivity seemed more salient.
I agree with you. Yet I bristle when people who I don’t know well start putting forth arguments to me about what is good/bad for me, especially in a context where I wasn’t expecting it.
I’m much more accustomed to people thinking that moral relativism is polite, at least at first.
Moral relativism can be annoying, but putting forth strong moral positions at, e.g., a freshers' fair does feel like something that missionaries do.
I think Training for Good is in this niche.
Throwaway account to give a vague personal anecdote. I agree this has gotten better for some, but I think this is still a problem (a) that new people have to work out for themselves, going through the stages on their own, perhaps faster than happened 5 years ago; (b) that hits people differently if they are “converted” to EA but not as successful in their pursuit of impact. These people are left in a precarious psychological position.
I experienced both. I think of myself as “EA bycatch.” By the time I went through the phases of thinking through all of this for myself, I had already sacrificed a lot of things in the name of impact that I can’t get back (money, time, alternative professional opportunities, relationships, etc). Frankly some things got wrecked in my life that can’t be put back together. Being collateral damage for the cause feels terrible, but I really do hope the work brings results and is worth it.
Can you say why?
Are we too cocky with EA funding or EA jobs; should EAs prepare for economic instability?
EA feels flush with cash, jobs, and new projects. But we have mostly “grown up” as a movement after the Great Recession of 2008 and may not be prepared for economic instability.
Many EAs come from very economically and professionally stable families. Our donor base may be insulated from economic shocks but not all orgs or individuals will be in equally secure positions.
I think lower- to middle-performers or newer EAs may overestimate their stability and be overly optimistic about opportunities for future funding.
If that’s true, what should we be doing differently?
I, for one, am really glad you raised this.
It seems plausible that some people caught the “AI is cool” bug along with the “EA is cool and nice and well-resourced” bug, and want to work on whatever they can that is AI-related. A justification like “I’ll go work on safety eventually” could be sincere or not.
Charity norms can swing much too far.
I’d be glad to see more 80k and forum talk about AI careers that point to the concerns here.
And I’d be glad to endorse more people doing what Richard mentioned — telling capabilities people that he thinks their work could be harmful while still being respectful.
Are you hoping to appeal to people who don’t think very analytically, or just to explain clearly that this is a very analytical community and it might not be as accessible or useful or fun for them if they are not also very analytical?
I actually think that some of the offputting words might help prevent bycatch.
I’ve said “helping other beings” before. It sounds a bit odd to some people but is more accurate.
“Help” sounds paternalistic or presumptuous to progressives.
I think conceptualizing job hunts like this for very competitive positions is often accurate and healthy, fwiw.
I agree that for a lot of people, this won’t be a problem. A lot of EA roles are professionalizing, so people can switch over to traditional careers if they want. (As in, community building is enough like management, event planning, or outreach roles at a lot of traditional orgs that the skills may transfer).
One piece of good advice for most people:
Issue-specific expertise and professional networks don't transfer well. I'd advise that a good backup plan should include spending time networking with EA-adjacent and non-EA orgs.
That issue seems inconvenient, but can be overcome with time and planning.
The main caution I want to raise is this:
it’s not always possible for EAs to leave themselves a psychological line of retreat into non-EA roles. Anecdote below to illustrate.
Suppose someone is currently reasonably happy, productive, and has a support system in their non-EA role and social scene. They’re not sure they’ll make it in EA.
Before switching to EA work, it might be worth considering this risk: suppose doing an EA role for a while fails, you can't get another EA role, and this leaves you miserable and ineffective in almost all subsequent non-EA roles. Is that worth it?
If you're miserable at work for 1–5+ years and have a hard time relating to friends, family, and both EA and non-EA peers during that time, do you have reason to believe your mental health and finances are solid enough to recover? Or do you have mental health risks or other risk factors that make that a pretty dangerous bet?
I didn’t know I was taking that bet. It caught me very off guard. It seems so costly to have played this game that it would have been better to not work on EA projects, and to keep my EA participation more casual instead.
Solution? I don’t think we have one yet. I don’t know where in the funnel I could have best been diverted, or how I could have best been supported when I tried to transition back to non-EA work. I imagine EAs will get better at this over time.
Personal anecdote: Pre-EA, I didn't enjoy work if it had a low likelihood of a positive outcome/impact. That motivated me to find the most useful things I could do in whichever role I was in. I enjoyed that and was effective at it. That also led me to EA.
After a deep dive into EA projects and social scenes, my definition of impact changed. I was always aware that many “good things” might not actually do good. But the set of things that I saw as plausibly high impact got much smaller. The bar for “having an impact” got much higher. I endorse that.
After a few EA projects, I found I don’t have the right aptitudes for most EA work. This was humbling but mostly fine. I figured I’d just get a non-EA role like the ones I’d had before, and go back to making them a little bit more effective.
When I tried to take this line of retreat, though, I found I couldn't. I hadn't fully appreciated how hard I would hit a wall when trying to do work that no longer met my bar for impact. All the professional roles I'd previously been reasonably happy and effective in were now below the bar. It seems my brain doesn't just prefer impact; I found myself fundamentally incapable of working on something full-time when my brain couldn't see the case for impact.

This seemed stupid; surely I should just be able to muscle through and do a traditional job while I skill up in something else and try to move on to something above my impact bar, right? I could not. I tried, but depression and anxiety set in fast without the connection to impact, which decreased performance, which increased depression and anxiety. (Fun spiral, mate /s). I'd need to quit and try again elsewhere. Same story repeated. (This didn't happen pre-EA).
The instability means EtG isn’t really available as a path either, and I’m not building a strong resume. I became a more frustrated and frustrating person, decreasing the quality of my work and my relationships. This means I don’t have the right mindset anymore to use my aptitudes in non-EA roles either! I’m not sure this is changeable. A lot of the negative impacts are irreversible at this point.
I know at least one other EA in a similar boat. Maybe there are more, or maybe I’m a rare kind of bycatch. I’m not sure if I expect more or fewer cases like mine as EA grows.
Note: I’m not proud of nor endorsing my mindset. I feel a bit stupid for feeling this way and for sharing it. I’ve read Julia Wise pieces about how it’s ok to leave EA being ok and how it’s fine to have more than one goal. I agree, but not at a deep enough level yet to alter my experience at work.
I really like this post. That said, I don’t think this is true: “dedicates don’t have bullshit jobs.” We might have different definitions of bullshit though.
Dedicates don’t take jobs without doing an impact analysis, agreed.
However, dedicates may choose to sacrifice the chance to work 10 hour days on interesting problems, to take strategic jobs in non-EA orgs or government agencies that involve a lot of day-to-day bullshit. They do this in the hopes that they might have a shot at impact when the time is right. I think it’s good that they’re willing to do this and wouldn’t want their sacrifice mistaken for being a non-dedicate.
I don't think (3) is that bad. Recruiting new members isn't always better than launching experienced members into good projects.
I wonder if 2–3-year cohort models for fellows would be better on established campuses.
This seems like one of those things that might be best for the movement but not best for the individual.
A uni organizer who recruits 5 excellent future performers might have just had the most impactful portion of their whole career. But the general marketing skills they gained might be less useful to them personally. Becoming an expert in some object-level issue would probably be more rewarding and open more doors over the course of their career than being a generalist in marketing, which also has lower earning potential than consulting, programming, or research skills.
I feel more uncertain about this if they’re actually doing project management and people management.
Would you have this same reaction if you saw Luke and Max or GWWC/CEA as equals and peers? Maybe so! It seems like you saw this as the head of CEA talking down to the OP. Max and Luke seem to know each other though; I read Max’s comment as a quick flag between equals that there’s a disagreement here, but writing it on the forum instead of an email means the rest of us get to participate a bit more in the conversation too.