Senior EA ‘ops’ roles: if you want to undo the bottleneck, hire differently
I’ve decided to submit this anonymously so I can be as honest as possible without ‘outing’ the orgs I’ve engaged with, or accidentally poisoning the well for myself with prospective employers. But if anyone reading this would like to learn more, I’m happy to give more specific advice under strict confidentiality—so feel free to DM.
I suspect it shouldn’t be too hard to ‘out’ me. Please don’t—I think it’s important to be frank about where I’ve seen hiring not go well, but these reflections could hurt the people who inspired them if they’re not ready to receive the feedback. I don’t want that to happen, so I’ve kept my criticisms general, and I would not have written this if I thought it would inadvertently target specific orgs / people.
Summary
EA is bottlenecked by ‘good senior ops people’; we hear this a lot. But my experience hiring, building and running teams myself, and engaging with EA organisations, has made me think there is a lot of room for improvement when it comes to attracting and hiring these people. So these are personal reflections on what I think a lot of EA orgs get wrong when hiring for these roles, and what I think can change to unblock the bottleneck.
The issues:
Many EA org ‘operations specialists’ jobs will not appeal to the calibre of candidate you want because:
the priority tasks are often mundane, not challenging and not the best use of their skills
the role is mostly positioned as “enabling the existing leadership team” to the extent that it seems like “do all the tasks that I / we don’t like”
hiring managers give the impression that they will likely discount the perspectives and skills the new hire could bring
I have a sense of excessive scepticism towards people who are not ‘EA / long-termist enough’ when it comes to hiring, but this will prevent you from getting high-calibre talent who can bring diversity of experience and new ideas. Not ‘EA enough’ is either:
not having worked in a recognised EA / long-termist org before
not being bought into what are considered almost doctrinal EA / long-termist ideas which a reasonable person could have well-grounded objections to
Solutions are given in the final section.
Where am I coming from, and what type of roles am I referring to?
I’ve worked as a generalist in management positions for about nine years, mostly project roles starting up new teams to build / make something. I’ve run teams of 20+ people on complex multi-stakeholder projects, making key strategic decisions and reporting to very senior stakeholders.
In the last few months, I’ve been approached directly many times to apply for fairly senior ‘ops roles’. The types of roles I was approached about, and that I am talking about in the rest of this post, would be things like...:
Managing Director / Chief Operating Officer / Chief of Staff
Director / Head of Operations
Director / Head of Special Projects
Why does this matter?
Having looked at a lot of job ads, many of them are unappealing in how they are written, or the hiring team makes a less than positive impression. So I didn’t apply to many of them.
My assumption is that if I’m feeling this, other good candidates might feel the same.
The skills and attributes that a senior ‘ops person’ will have
Someone qualified to do any of the roles above will have the following skillsets, and will consider their comparative advantage to lie in...:
Strategic thinking, understanding how the organisation should tackle its goals, what its comparative advantage is and what it should leave to others to do
Importantly, the ability to brutally prioritise, challenging other senior team members to ‘kill their darlings’ in order to do the most important work, especially as the wider environment shifts
Leadership, holding space internally to mould alignment, and leading the team through ambiguity, complexity and challenges with people feeling supported; and knowing how to build alliances externally to get things done
Learning quickly and figuring out what to do next; whether that’s on organisational risks (e.g. funding), operational risks (e.g. legal / governance), or new subject matter central to the organisation’s functions (e.g. that new EA thing everyone’s buzzing about)
Excellent stakeholder management skills
Great organisational culture development skills
Implementing processes that improve how the team works, such as hiring / induction / general HR; processes for drafting and signing off external publications; co-working / virtual working
Coaching, developing and enabling team members to do some of the more simple tasks that fall in the buckets above
Intuition of what leads to success / failure, based on the work they’ve done and having observed other people / teams
However...
Many EA org ‘operations specialists’ jobs will not appeal to ops specialists
Why, you ask?
The priority tasks are often mundane, not challenging and not the best use of their skills
I’ve read job descriptions (JDs) for roles which are more than 50% the ‘implementing processes’ skillset from the list above.
Sometimes this might be necessary in the short-term while everyone is in ‘start-up mode’ and will change as more hires are made. But it still sends alarm signals that the existing team does not have a good understanding of the skills I outlined which a good hire can bring. If this is the starting point, I think many people qualified to do the role, who know what their comparative advantage is, would feel wasted on it and not inspired to apply. I’ve heard on the grapevine that these specific roles did not find many great applicants, which I think backs up this point.
The role is mostly positioned as “enabling the existing leadership team” to the extent that it seems like “do all the tasks that we don’t like”
Sometimes this is explicitly put in the JD, sometimes it’s the subtext. I’ve read JDs where senior ops people’s role was almost literally “enabling the existing leadership team so they can do strategic work”.
The issues with this JD are, in my opinion:
this would not be the comparative advantage of someone qualified to do such a senior role—again, see “skills and attributes” section
it suggests the existing leadership team is not very attuned to creating roles that are fulfilling and that retain staff, so convincing them to focus on developing a good organisational culture could be an uphill battle
people at a certain level of seniority know that they will sometimes know better than other leaders, and will not welcome a position which could minimise their voice
Hiring managers give the impression that they will likely discount the perspectives and skills the new hire could bring
A few common features of senior leadership teams I’ve observed across many EA orgs are:
most of their expertise is in technical subject matter, and not so much in building the org culture and structure to achieve goals on that subject matter. So they would not really speak the same language as a new senior ops person, and could easily not get the arguments being made
many are very young and have limited work experience, let alone work experience outside of EA. While we all like to think we are self-aware enough to mitigate this, diversity of experience humbles you; it makes you viscerally aware of what you don’t know, and why you should be more open to others and want their help / input even if it makes you feel uncomfortable / threatened
I think these features mean they’re less likely to get / listen to my perspective, or value the way I did things in a different sector, so I’d be nervous about applying to them. The irony is I’ve been encouraged to apply to these organisations by many EAs because my skills do complement leadership teams like these!
One conversation with a hiring manager illustrates this well. They more or less said they would not want me involved in the biggest strategic decisions within the org because I was not technical. They did not seem to understand that many of my skills were directly transferable, and even when I laid this out they were dismissive and tried to move to another topic. It made me concerned they didn’t see how they might have blindspots in their decision-making which outsiders are often good at picking up. It also signalled they didn’t appreciate how my reputation would be tied to the org’s success / failure, and that a seat at the table would therefore be commensurate with that risk.
Excessive scepticism towards people who are not ‘EA / long-termist enough’, or not from orgs in this space
Before I talk about the benefits of hiring people from outside EA, I want to make clear that there are benefits to hiring someone who is well-versed in EA concepts. Knowledge of EA concepts and how to apply them, like the ITN framework or expected value calculations, can inform effective strategic decisions. Being versed in EA / long-termist thinking gives a common language to understand and anticipate other members of their organisation better. But there are costs to seeking people from within EA / long-termism.
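As a toy illustration of the kind of expected-value reasoning mentioned above (all probabilities, project names and impact numbers here are invented for the example, not drawn from any real org):

```python
# Illustrative only: comparing two hypothetical projects by expected
# value. Probabilities and impact numbers are made up.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs over
    mutually exclusive scenarios."""
    return sum(p * v for p, v in outcomes)

# Project A: likely, modest impact. Project B: long shot, big payoff.
project_a = [(0.8, 100), (0.2, 0)]
project_b = [(0.05, 2000), (0.95, 0)]

print(expected_value(project_a))  # 80.0
print(expected_value(project_b))  # 100.0
```

Of course, real strategic decisions involve far more uncertainty than a two-outcome toy like this.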
Why it can be extra valuable to hire people who have not worked in a recognised EA / long-termist org
One of the main reasons to search for hires ‘outside’ is simply because the number of EA orgs needing these people outstrips the number of EAs with the right experience; especially if you want them to be proven ‘ops people’, in which case they almost definitely won’t have already worked in EA orgs.
But there are also other benefits to seeking ‘ops people’ from outside EA. To refer back to the skills and attributes list:
Excellent stakeholder management skills, and leadership; particularly important as EA orgs acquire more funding and set up more partnerships with non-EA / EA-adjacent organisations. If all you know is building alliances within the EA ecosystem, you’ll be very challenged
Strategic thinking and intuition about what leads to success / failure; long-termist orgs have goals that are harder to benchmark progress against, particularly if they are to do with X-risks or S-risks. So there’s a greater risk of investing in things which seem important or sound good but are not. But if you only hire people whose worldview greatly overlaps with yours, you will not have sufficient challenge. This is why the cliché of “fresh eyes” sums up the value an outsider can bring so well, and it’s why I try to hire people from outside my current sector when I can.
There’s a bigger literature on how organisational diversity makes creative / problem-solving teams more effective, if you want to read more about this.
Leadership and team cultivation / organisational culture; if your leaders have only worked in EA orgs, their experience of org culture and practices will by definition be narrower. I think this narrowness is reinforced by how org leaders seek information; I hear from friends working in EA orgs that they overwhelmingly seek advice from other EA orgs on things like hiring and human resources processes. Sometimes this will be valuable, especially if there is some really good practice in some orgs. But hiring someone with experience from outside EA opens you to a much wider network, more examples of what good / bad / weird looks like, and what many different really good / bad leaders do. So a good ops person in a senior role, with the ability to influence, will bring new ideas which could be really helpful.
I think the benefits of ‘outsiders’ in this domain are particularly important because, sadly, there are some poorly kept secrets about EA orgs that have had big cultural problems and lacking leadership, leading to reputational harm and, worse yet, high burnout rates and stress among junior team members.
In summary, more models of the world / how to do things well = good!
However, I’ve heard of ‘card-carrying’ EAs described as the ‘wildcard’ in a final hiring round because they had not worked in an EA org before. This leads me to think that if the wildcard is an established EA, the hiring org cannot have been thinking broadly enough about how to attract talent from outside EA: both in the initial casting of the net and in how candidates are evaluated throughout the hiring rounds.
Why it can be extra valuable to hire people who are not signed up to all EA / long-termist ideas
There seems to be a lot of worry about hiring people who ‘might not be value-aligned’.
But how do you really get a sense of that? You can ask in an interview what someone thinks about different topics, and how much they converge / diverge from what you think the organisations values are, and hire to maximise alignment and minimise friction. But at what cost? Especially given points I make about how useful it is to have different perspectives.
Here’s some reasons you should reevaluate ‘value-alignment’ requirements:
how you (dis)agree is often more important than what you (dis)agree on. A ‘yes man’ and a belligerent nay-sayer will both cause org dysfunction, but in different ways. If people convey their (dis)agreement in ways that are open to others, make uncertainties and risks concrete and cultivate a genuine synthesis, you’re winning. This is why, hypothetically, someone working on e.g. global health and development who primarily comes from a feminist worldview could be a great team asset.
an excellent hire could have good reasons for wanting to work in an EA / long-termist org without being 100% behind the long-term mission, and practically speaking this divergence could be an asset to the org. I think a healthy near-term vs. long-term tension can be really useful because most of our work is in emerging fields where we are still looking to demonstrate proof of concept and value. A hypothetical illustration...
An improving institutional decision making (IIDM) org with long-termist goals could be very appealing to someone with considerable organisational change knowledge within large bureaucracies.
Such a potential hire might not be 100% sold on long-termism arguments to do with astronomical waste, but agree that IIDM is important for improving the world in the coming decades. They will likely have a common language with external stakeholders in non-EA orgs, that would make them a great asset for partnership building.
If employed, this hire could make good arguments for demonstrating proof of concept within the coming 2-5 years (e.g. failing fast, winning more resources / alliances through near-term success to do more), and convince a team with a more long-term focus who would otherwise be happy doing more theoretical work.
expecting high convergence in values somewhat contradicts the idea of ‘EA as a question’; and as outlined in the previous section under the “strategic thinking” and “intuition” point, high risks can come from assuming agreement
However, I’ve had some interviews which felt like a test of how much EA / long-termist doctrine I was signed up to. I think it would have been more valuable if the purpose was to answer “given your epistemic starting point and how you work and behave with others, could you work well with us and add value?”. More openness about any fears / worries on the side of the hiring org is also helpful, in my opinion.
Just to be clear, I do think there is a difference between ‘not remotely value-aligned’ and ‘not 100% convergent on our org values as stated’. I’m advocating EA orgs should be more strategic in how they think about the benefits of convergence / divergence in values and reasoning, and that this will likely mean relaxing the thresholds for what ‘EA / long-termist aligned’ means.
Solutions
For EA orgs hiring, particularly org leadership:
Before starting a hiring round
Research what good looks like for ‘ops roles’ like these outside of EA; this could include start-ups, established companies, think tanks, government, etc
Undertake a gap analysis of the skills, experience and ideas that your organisation is lacking to more strategically think about what someone outside the EA org ecosystem would bring. Build this into your strategy for advertising the role. Key questions:
what are the specific skills which will make a person a better employee and how much are those skills worth compared to other skills?
what are my personal strengths and limitations; not just in skills / things I dislike, but in the ways I see the world and behave? Who would complement these?
When it comes to job descriptions (JD):
Clearly articulate the value you see a senior ops role can bring; particularly based on the existing strengths / gaps within your org. You will attract more people if the JD is about empowering talented people to improve your org based on the gaps you see as well as the blindspots you haven’t seen yet
If you want the role to do basic ops tasks, say so clearly but be aware this will discourage a lot of people
When it comes to hiring conversations and job interviews:
Be clear about what you don’t know and what you think this person can bring. Be open and engage in two-way listening. A high calibre person will know their worth, and could easily be discouraged by a team where people might not know what they want, might show less self-awareness of their blindspots or might be dismissive or undermining towards another’s skills.
Use alignment questions in the job interview as an opportunity for two-way listening. Aim to come away with food for thought. Be willing to revise your own model of what good organisational alignment and more importantly organisational strategy looks like
Analyse and benchmark how you’re doing.
Write up a hiring strategy, and put it in front of people in and out of EA orgs for their critique.
Do a post-mortem on important hiring rounds to learn for next time. Things you should try to measure...:
who is reading the application?
who is applying?
who is falling off at each interview / assessment stage, and why?
are there any trends in the people who made it to the final rounds, and what do they tell you about your hiring approach?
Where there are good lessons learned, share them
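The measurement questions above amount to tracking a funnel. As a minimal sketch (all stage names and candidate counts here are invented for illustration):

```python
# Toy sketch of a hiring-round post-mortem: all stage names and
# candidate counts below are invented for illustration.

def funnel_conversion(stages):
    """Given ordered (stage_name, candidate_count) pairs, return the
    fraction of candidates retained across each transition."""
    return [
        (f"{prev} -> {curr}", n / prev_n)
        for (prev, prev_n), (curr, n) in zip(stages, stages[1:])
    ]

stages = [
    ("read the ad", 400),
    ("applied", 60),
    ("first interview", 12),
    ("final round", 4),
    ("offer made", 1),
]

# A sharp drop at one transition suggests where to look for problems,
# e.g. an unappealing JD (ad -> applied) or off-putting interviews.
for transition, rate in funnel_conversion(stages):
    print(f"{transition}: {rate:.0%} retained")
```

Segmenting the same funnel by candidate background (EA vs. non-EA, sector, seniority) is what surfaces the trends mentioned above.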
Funders and board members:
Read and reflect on the above, and consider encouraging these organisations to integrate this thinking in their hiring processes
Glad to see the number of up-votes; clearly other people were thinking something similar. But it kind of worries me that there hasn’t been more dissent or nuance thrown in. So I would invite anyone to speak up who...:
thinks this isn’t as big a problem as I’ve made it out to be
thinks the arguments around hiring from outside EA are not that strong
read this and felt they were the target audience, but didn’t change their mind for whatever reason
It would feel weird if after writing all this out I didn’t come away feeling like I learned something!
(Writing quickly, sorry if I’m unclear)
Since you asked, here are my agreements and disagreements, mostly presented without argument:
As someone who is roughly in the target audience (I am involved in hiring for senior ops roles, though it’s someone else’s core responsibility), I think I disagree with much of this post (e.g. I think this isn’t as big a problem as you think, and the arguments around hiring from outside EA are weak). But in my experience it’s somewhat costly and quite low value to publicly disagree with posts like this, so I didn’t write anything.
It’s costly because people get annoyed at me.
It’s low value because inasmuch as I think your advice is bad, I don’t really need to persuade you that you’re wrong; I just need to persuade the people this article is aimed at that you’re wrong. It’s generally much easier to persuade third parties than people who already have a strong opinion. And I don’t think that it’s that useful for the counterarguments to be provided publicly.
And if someone was running an org and strongly agreed with you, I’d probably shrug and say “to each their own” rather than trying that hard to talk them out of it: if a leader really feels passionate about shaping org culture a particular way, that’s a reasonable argument for them making the culture be that way.
For some of the things you talk about in this post (e.g. “The priority tasks are often mundane, not challenging”, ‘The role is mostly positioned as “enabling the existing leadership team” to the extent that it seems like “do all the tasks that we don’t like”’) I agree that it is bad inasmuch as EA orgs do this as egregiously as you’re describing. I’ve never seen this happen in an EA org as blatantly as you’re describing, but find it easy to believe that it happens.
However, if we talked through the details I think there’s a reasonable chance that I’d end up thinking that you were being unfair in your description.
I think one factor here is that some candidates are actually IMO pretty unreasonably opposed to ever doing grunt work. Sometimes jobs involve doing repetitive things for a while when they’re important. For example, I spoke to 60 people or so when I was picking applicants for the first MLAB, which was kind of repetitive but also seemed crucial. It’s extremely costly to accidentally hire someone who isn’t willing to do this kind of work, and it’s tricky to correctly communicate both “we’d like you to not mostly do repetitive work” and “we need you to sometimes do repetitive work, as we all do, because the most important tasks are sometimes repetitive”.
I think our main disagreement is that you’re more optimistic about getting people who “aren’t signed up to all EA/long-termist ideas” to help out with high level strategy decisions than I am. In my experience, people who don’t have a lot of the LTist context often have strong opinions about what orgs should do that don’t really make sense given more context.
For example, some strategic decisions I currently face are:
Should I try to hire more junior vs more senior researchers?
Who is the audience of our research?
Should I implicitly encourage or discourage working on weekends?
I think that people who don’t have experience in a highly analogous setting will often not have the context required to assess this, because these decisions are based on idiosyncrasies of our context and our goals. Senior people without relevant experience will have various potentially analogous experience, and I really appreciate the advice that I get from senior people who don’t have the context, but I definitely have to assess all of their advice for myself rather than just following their best practices (except on really obvious things).
If I was considering hiring a senior person who didn’t have analogous experience and also wanted to have a lot of input into org strategy, I’d be pretty scared if they didn’t seem really on board with the org leadership sometimes going against their advice, and I would want to communicate this extremely clearly to the candidate, to prevent mismatched expectations.
I think that the decisions that LTist orgs make are often predicated on LTist beliefs (obviously), and people who don’t agree with LTist beliefs are going to systematically disagree about what to do, and so if the org hires such a person, they need that person to be okay with getting overruled a bunch on high level strategy. I don’t really see how you could avoid this.
In general, I think that a lot of your concerns might be a result of orgs trying to underpromise and overdeliver: the orgs are afraid that you will come in expecting to have a bunch more strategic input than they feel comfortable promising you, and much less mundane work than you might occasionally have. (But probably some also comes from orgs making bad decisions.)
FWIW, this is also roughly my take on this post (and I felt a similar hesitation to Buck to objecting, plus being pretty busy the last few days).
I really appreciate your honest response—thanks for sticking your neck out.
I think you’ve layered on nuance / different perspectives which enables a richer understanding in some regards, and in some others I think we diverge mostly on how big a risk we perceive non-EAs in senior roles as presenting relative to value they bring. I think we might have fundamental disagreements about ‘the value of outside perspectives’ Vs. ‘the need for context to add value’; or put another way ‘the risk of an echo chamber from too-like-minded people’ Vs. ‘the risk of fracture and bad decision-making from not-like-minded-enough people’.
I’m going to have a think about the most interesting / useful way to respond; I suspect it would be a bit dull / less useful to just rebut point by point rather than get deeper, but I don’t want to infer too much about your drivers. Will likely build on this later in the week.
I agree that this is probably the crux.
First off, again lauding you and CarolineJ for throwing some challenge. I’ve tried to think about how to counter-argue by being critical of your arguments / their implications without sounding like I’m being critical of you two personally. I think it’s hard to do this, especially through this medium, so I apologise if I get this wrong.
So main counter-arguments...:
My point here wasn’t about quantifying how often it happens, or whether it’s a “fair” claim. It’s about wanting people to write job descriptions that will attract more / better candidates than they’re currently doing. Even if it doesn’t apply in 90% of cases, I thought it was important to make the point so people could think about the signals it sends out, as I laid out in the post.
I agree though that all jobs will involve some ‘grunt work’ or enabling others even at the cost of your own projects, and it’s important to signal that; the issue as I outlined is that a good candidate will think “they really don’t know how to get the best of me” if the JD is mostly that.
I think that is often the value they will add—paradigmatic challenge as well as practical insight. It’s not like the entire EA / LT house is built on solid epistemic foundations, let alone very clear theories of change / impact behind every single intervention. (I do think this is a dirty secret we don’t own up to well, but maybe that’s for another post.) And even if there were, it’s not as if people with outside experience wouldn’t do a good job of red-teaming them. I outlined examples in my original post of where outsiders would bring value in emerging / pre-paradigmatic fields, even if they are not fully EA / LT signed up; I think they should be more strongly considered.
For me personally, I find perspectives of people outside of my field helpful for challenging my fundamental assumptions about how innovation actually works. Them being in the room makes it more likely a) they’ll get listened to, b) they have access to the same materials / research as me, so we can see why we’re diverging on what appears to be the same information, and then check / challenge each other’s assumptions.
I know I should really listen when I feel uncomfortable; when I feel annoyed at what another team member is saying. It usually indicates they’ve struck a nerve of my uncertainty, or pointed out some fundamental assumption I’m resting too heavily upon. I’m not always good at doing it, but when I do it does make our work stronger.
There are some structural / cultural issues which pop up again and again in different workplaces. EA / LT might have more astronomically massive goals than most other orgs, but the mechanics of achieving them will have more in common with other teams / orgs than they will have idiosyncrasies; especially given most planning periods / real decisions never go beyond 5-10 years.
I can’t see why someone outside EA / LT would be constrained by ‘not being signed up enough’ from contributing to the strategic decisions you mention. Some of them are pretty classic whatever team you’re in, like senior / junior researchers. The cultural ones—e.g. implicitly encouraging / discouraging weekend working—would especially benefit from outsiders who have experienced lots of team / org environments and have better intuition for good / bad workplace cultures. I think so because, again, lots of EA orgs have really messed up their own cultures and left a trail of burn-out from it.
Reading this makes me think two things:
Isn’t that an argument for bringing outsiders into the organisation; so they can acquire the wider context, weigh it up along with their experience from other situations—some analogous along the lines you think and some analogous along different lines than you’d expect—and then add their thoughts to yours to come down on a decision? My bet is that would be more valuable to you than someone more similar to you doing the same to make those strategic decisions collaboratively.
I’m always sceptical about arguments that can be boiled down to “our context is different” or “our organisation is unique”. We all think that about our teams / orgs; we all think our constraints and challenges are very specific, but in reality most things going well / badly can be explained by some combination of clarity of purpose / vision, clarity of roles and responsibilities, and how people are actually working together[1].
But I hope this would be a two-way street? To be fair, I don’t know of any recruitment where this kind of negotiation of terms doesn’t happen to begin with. That’s normal. People recruit ‘outsiders’, different levels of autonomy are given, they see if they can work together, and if it doesn’t work they part ways; usually with the newbie leaving, but sometimes with the incumbent leaving as the vision of the newbie carries more sway...
Working well with people you don’t 100% agree with is very possible, especially if you’ve optimised your hiring for certain qualities like openness and the ability to argue while maintaining cohesion. It’s also just a really important leadership skill to build for most contexts.
Reading this, I felt a little like “what’s wrong about that?” Or more specifically “what’s wrong about having systematic disagreements?” Conflict / disagreements are good things! The more fundamental the better, provided you’re not rehashing the same ground on repeat (and there is a subtle difference between seeing the same argument ad nauseam compared with the same justifiable tensions being played out in different problem / solution spaces).
However, if your assumption is that the non-LT person would be overruled consistently, then yes that would be a problem because then you’re sacrificing the opportunity for synthesis or steel-manning. I feel that if I’m making really good LT arguments with a good theory of change, I should be able to convince someone who isn’t 100% signed up to that to come on the journey with me. If that person was being overruled again and again without any form of synthesis emerging, I’d take that as a sign that I was doing a rubbish job of listening and understanding another person’s perspective.
I think if I was working in such an LT org I would be terrified of not having a detractor in the room. This is because I think a lot of LT Vs. near-term conflict happens because of a lack of concrete theories of change being put forward by LTists. I would want a detractor challenging the assumptions behind my LT decisions, steel-manning at every step of the way. Personally, I would want them jumping up and down at me if it sounded like I was willing to sacrifice a lot of near-term high probability impact in the service of more spurious long-term gains. If I am going to do that, I want to be really confident I made that decision for the right reasons; I want to be so confident that the detractor can plausibly agree with me, and even leave the org on good terms if they could understand the decision being made but couldn’t abide by it.
I am sceptical that an advisor from outside the org would give that level of challenge; they don’t have the full context, they are not as invested in thinking about these things, and they won’t have the bandwidth to really make the case. And I’m very sceptical that really smart, conscientious people who also happen to share the same set of assumptions as me will do as good a job steelmanning. In principle they should be able to, but in practice assumptions / blindspots need people who don’t have them to point them out.
I think two additional cruxes I’m realising we have might be:
I think really good organisational leadership is hard, and requires experience—ideally lots of experience, doing well and badly and reflecting on yourself and what it’s like to work with you. I think the leadership in many EA orgs have not had those opportunities, which I think is a big risk. But I think you and perhaps others see this as less the case?
I trust my judgement less if I am not getting serious challenge from people on strategic decisions—challenge that shakes me both on the process and on the paradigm / values system. I don’t think it’s possible that I know everything, and I believe the wisdom of crowds gets you to a better solution space. I think I might be unusually strong in this tendency, but I won’t put words in your / others’ mouths about what the opposite looks like.
In fact, over the last two weeks I received a tonne of highly critical challenge, fundamentally about whether my programme had a plausible theory of change. Though initially frustrating, it’s helped me see my work for what it is: ultimately experimental, and if it doesn’t work it must be killed off.
To finish, this is the type of thing I’d usually chat over in a tone / setting that’s relaxed, open, and less point-by-point rebuttal-y; I think this type of topic works better as an open, reflective conversation, but this medium sets it up to feel more confrontational. Ultimately, I don’t know what’s going on in anyone else’s head, or their full context, so I can only make observations, ask questions, and pose challenges, and hope they feel useful. Because I don’t think this medium can do the topic justice, I would be open to exploring it in a different one if need be.
And yes, part of my deciding to go for the delicate core is inspired by Scott Alexander’s recent post, and reinforced by Helen’s more recent post.
I totally over-summarised this bit in haste, but there’s tonnes of org literature on this; much of the best I’ve read is by McKinsey on organisational health.
This counter-response updated my views quite a bit, thanks! Maybe a way to reconcile the two views in a specific example:
Strategic alignment/understanding: Ultimately, at some level the answer to this question depends on what the organisation is trying to achieve and the philosophical trade-offs its leadership is willing to make.
Experience: But in order to make that decision, you have to know what the effects of more or less working on the weekends would actually be. And effects on organisational culture can be gradual, so you need to have observed this process to be able to predict it well.
You don’t necessarily need to have both capacities in the same person, as long as there’s appreciation of the respective areas of expertise.
tl;dr: I suspect selecting for speaking fluent LessWrong jargon is anti-correlated with being exceptionally good at ops.
Double-cruxing in the comments is so satisfying to read. In this comment I’ve outlined another possible related component that this post might have been pointing to (the things below get easily conflated in my head unless I do work to tease out the distinct pieces, so I thought this might plausibly be happening a bit here too).
I am curious how much actual values matter versus how much being good at EA/rationalist signalling matters.
Eg. I think people take what I say a lot more seriously when I use jargon compared to when I say the same core idea in plain English or with a more accessible framing (which usually is a lot more work: saying what I believe is true in language I think a smart newcomer with a different academic background could understand is hard but often seems worth it to me).
I can imagine someone who gets vibed as a “normie” because of the language and framing they use (because they’ve spent more time in the real world than the EA world where different wording gets social reinforcement) being dismissed despite caring about the long-term future and believing AI is a really big deal. Would you even get many applicants to orgs you’d be hiring at that didn’t buy into this at least a little?
The reason I find the use of language being a bigger deal than actual values plausible is that I have humanities friends who are quicker than me at picking up new concepts (teaching maths to them is both extremely fun and a tiny bit depressing because they pick things up much quicker than I ever did, turns out group theory is easy, who knew? As an aside, group theory is my favourite thing to teach to smart people with very little maths background because there are quite a few hard-ish exercises that require very little pre-requisite knowledge).
They have great epistemics in conversations with me where there are high enough levels of trust that very different worldviews don’t make “the other person is trolling/talking in bad faith because they can’t see what just obviously logically follows” the number 1 most plausible hypothesis when everyone is talking past each other.
I couldn’t bring them to an EA event because when I simulate their experience, I see them being dismissed as having poor reasoning skills because they don’t speak fluent STEM or analytic philosophy.
I know that speaking rationalist is a good signal—I think speaking like a LessWronger tells you the person has probably read a lot of LessWrong and, therefore, that you are likely to have a lot of common ground with them. However, it does limit the pool of candidates a lot if finding LessWrong fun to read is a prerequisite (which probably requires being both exceptionally analytical and exceptionally unbothered by confrontational communication styles, among other personal traits that might not correlate well with ops skills).
I suspect for ops roles, being exceptionally analytical might not be fundamentally needed to be amazing (and I don’t find it implausible that being exceptionally analytical is anti-correlated with being mindblowingly incredible at ops).
I don’t normally think you should select for speaking fluent LessWrong jargon, and I have advocated for hiring senior ops staff who have read relatively little LessWrong.
Great (and also unsurprising, so I’m now trying to work out why I felt the need to write the initial comment).
I think I wrote the initial comment less because I expected anyone to reflectively disagree and more because I think we all make snap judgements that maybe take conscious effort to notice and question.
I don’t expect anyone to advocate for people because they speak more jargon (largely because I think very highly of people in this community). I do expect it to be harder to understand someone who comes from a different cultural bubble and, therefore, harder to work out if they are aligned with your values enough. Jargon often gives precision that makes people more legible. Also human beings are pretty instinctively tribal and we naturally trust people who indicate in some way (e.g. in their language) they are more like us. I think it’s also easy for these things to get conflated (it’s hard to tell where a gut feeling comes from and once we have a gut feeling, we naturally are way more likely to have supporting arguments pop into our heads than opposing ones).
Anyway, I feel there is something I’m pointing to even if I’ve failed to articulate it.
Obviously EA hiring is pretty good overall: big things are getting accomplished and have already been achieved. I probably should have said initially that this all feels quite marginal. My guess as an outsider is that hiring is, overall, done quite a bit better than at the median non-profit organisation.
I think the reason it’s tempting to criticize EA orgs is that we’re all invested in them being as good as they can possibly be, and so want to point out perceived flaws to improve them (though this instinct might often be counter-productive because it takes up scarce attention, so sorry about that!).
I have related thoughts on over-selecting for one single good-but-not-the-be-all-and-end-all trait (being exceptionally analytic) in the EA community in response to the ridiculously competent CEO of GWWC’s comment here: https://forum.effectivealtruism.org/posts/x9Rn5SfapcbbZaZy9/ea-for-dumb-people?commentId=noHWAsztWeJvijbGC
Part of why I think selecting for this trait is particularly bad if you want to find amazing ops people is I have a hunch that executive dysfunction is very bad for ops roles. I also suspect that executive dysfunction makes it easier to trick people in conversation/interviews into thinking you’re really smart and good at analytic reasoning.
I actually think that executive dysfunction and interest in EA are so well correlated that we could use ASD and ADHD diagnostic tools to identify people who are pre-disposed to falling down the EA/LessWrong/longtermism rabbit-hole. (I’m only half-joking, I legitimately think finding people without any executive dysfunctional traits who are interested in EA might be the root cause of some talent bottlenecks)
I am biased because I think I’m quite good at “tricking” people into thinking I’m smart in conversation and that halo-effecting me in EA has been bad and could have been a lot worse (cos at age 20 you just believe people when they tell you your doubts are imposter syndrome and not just self-awareness).
Don’t get me wrong, I love me (well, I actually have a very up and down relationship with myself but I generally have a lot more ups than downs and I have an incredible psychologist so hopefully I won’t need a caveat like this in future 🤣🤞).
I just think that competency is more multi-dimensional than many rationalist people seem to alieve.
This was such a great post and I was nodding along throughout the whole article, except for the part about the importance of hiring people who are “strategically aligned”.
I think that you often need people at the top of the organization to deeply share the org’s ethics and long-term goals, otherwise you find yourself in very long debates about theories of change, which ultimately affect a lot of the decisions (I wonder if you have experienced this?). The exception to this is when you find non-EA but exceptional people who share EA goals while also having their own perspectives and motivations, who are quite flexible and open-minded—those can indeed bring a fresh perspective. But I think those people are rare enough that it would make sense to filter at least a little bit in the interview for the long-term goals, ethics, and values of the person, and how they would approach the org’s theory of change.
I don’t think we disagree much here, but where we do I’m trying to bottom out the cruxes…
I think it’s primarily risk appetite. I do agree, though, that the wrong hire can make things hellish, on many levels. But in my experience that’s usually been driven less by what people thought was important and more by the individual’s characteristics, behaviours, degree of self-awareness, and tendency towards defensiveness / self-protection vs. openness. Usually, if it doesn’t work out in terms of irreconcilably different views on a problem, people just agree to disagree and move on!
Perhaps we also have different things in our heads as meaningful signals of being a good leader for the org, and maybe different models of how a “signed up to doing good but not every EA doctrinal belief” person would operate.
As mentioned in the post, how you (dis)agree is often the most important thing; which reflects what you’re saying about flexible and open-minded people with their own perspectives. I think I stand by the IIDM example, illustrating how you don’t need to be signed up to every EA idea to add a lot of value to an organisation. I think it’s similar for X-risk oriented pandemic preparedness, AI risk, etc.: sometimes the most strategically sound thing to do would be more near-term, but those with a long-term orientation might not have that in their immediate view. The same would apply for e.g. deciding which funders / partners to work with, skills / talent requirements within the team, etc.
(That said, if there’s an instinctive feeling that an EA adjacent / non-EA hire—senior or otherwise—could threaten organisational alignment, it’s almost a recipe for unconscious ostracism and exclusion; almost in a self-fulfilling prophecy kind of way. It’s just very human to react negatively to someone who you feel is threatening. So yeah—another thing to reflect on if you are working in an EA org).
Maybe another crux is how much those people are exceptions? As I argued in the post, my hunch is there are many more people like that who are not getting a shot—referring again to the ‘wild card’ example in the post. I suspect this question could really only be answered by orgs doing a post-mortem on their recruitments: seeing why people fell off at different stages, and asking (ironically, as in an actual post-mortem) whether anything could have been done differently.
For another counterargument to your point that some positions don’t look attractive to overqualified people, here’s Ben West’s article. I personally think that making the position a challenge and a growth opportunity makes people more motivated and excited.
(speaking for myself) My reasoning for disagreeing is mostly prosaic. People keep telling me hiring senior ops people is hard, and trying to get longtermist alignment on top of that is ~impossible, but when Rethink Priorities actually tried to do such hiring, the process has mostly been a lot smoother than I expected (note that I’m only indirectly involved in the ops hiring).
So I just kind of assume that people are talking about different roles/skillsets when they claim there are such large bottlenecks in ops talent. This exchange between Abraham Rowe (Rethink’s COO) and other commentators may be instructive on the differing standards for what skillsets senior ops work entails.
To be clear there are times where amazing ops skills (really, executive/entrepreneurship abilities) can be somewhat of a bottleneck for my work, e.g. finding a top-notch civilizational refuges CEO. But in those cases most of the problems you list are clearly not applicable (the refuges CEO is not expected to play second-fiddle to anyone).
Thanks for that response—interesting to hear others aren’t struggling! Would be cool if you / your org shared more about this.
One thing that occurs to me is that your post assumes that the only way to address the issues raised here is to hire different people and/or give them different responsibilities. But another possible route is for EA organizations to make more use of management consultancies. That could be a path worth considering for small nonprofits whose leaders mainly do just want to hire someone to take care of all the tasks they don’t want to do themselves, and whose opportunity to make use of more strategic and advanced operations expertise is likely to be too sporadic to satisfy an experienced operations professional, especially one whose experience is mostly with larger organizations or companies and who is not strongly aligned with EA values. Said experienced ops pros could in turn perhaps do more of the work they want to do (and be better paid for it) working for a consultancy rather than in-house at a small organization.
I know there have been some efforts to get an EA-branded management consulting agency going since Luke’s post last year but am not aware of any of them hitting paydirt quite yet—happy to connect you or others interested to relevant people as appropriate. The main barrier as I understand it so far has been EA orgs’ lack of demonstrated demand for the services, but I wouldn’t necessarily take this as a signal that the resources are already there in-house or that there would be no benefit to the organizations from accessing them.
Well thanks for putting this brain worm into my ear. As I’m trying to make a decision between more project management for the Government or going back to the private sector, this looks very appealing.
Strongly upvoted.
I’m really glad that you wrote this. I’ve had generally similar thoughts during the past year as I’ve looked at various operations manager roles and been invited to apply to various operations manager roles. One interviewer (who seemed young and seemed to have limited work experience) ignored the complexities and suggestions that I raised, which gave me a bad impression. There is some overlap with the perennial issue of people not knowing how to conduct interviews, or how to run a hiring process in general.
I especially agree with the idea that too many operations manager job postings describe handling administrative tasks that the other senior staff don’t want to handle, and with the point about over-emphasis on EA-alignment.
This post has inspired me to post a similar piece about operations that I’ve been sitting on for several months.
Glad to hear.
I skimmed your post and felt it added a lot of value to mine, as it highlighted a big difference between ‘ops’ roles in an operation—i.e. keeping the show on the road—vs. projects—i.e. starting up a new show and running it at the same time. Very different skillsets again, because of different levels of uncertainty: one is more about optimising, the other more about figuring out what the hell to do.
I think the ops skillset I set out is much more project-oriented, so I’d welcome more of your critique of hiring approaches from a more ops-y, less project-y perspective.
My thoughts aren’t really well-formed on this so you should take all this with a grain of salt. These are just my own perspectives rather than some kind of a well-researched argument.
If I recall correctly (and I might be misremembering this) the main difference between operations and projects is that operations are ongoing and projects have specific end dates. I view them as very similar; a handful of skillsets are distinct, but there is massive overlap.
My current view is that an aspect of the hiring difficulty in EA isn’t so much ops-y versus project-y; it is more so an organization that knows what they will be doing versus an organization that doesn’t know what they will be doing.
I can easily imagine a more established team (even a smallish organization with less than 50 people) hiring a project manager and doing it well if they have a clear idea of what kinds of projects they will be doing. I think the difficulty of hiring that many small organizations have is that they don’t have a clear idea of what projects the new hire would be doing. They have vague ideas, but even those vague ideas aren’t very reliable because things might shift.
To make it more concrete, if my team is distributing bednets, or sending engineers to rural India to design wells, or even researching AI safety, we have a pretty clear idea of what a new hire would be doing and what skills are helpful. But if my team is trying to figure out how to do the most good in CAUSE_AREA, or how to have the most impact at the intersection of longtermism and TOPIC, or even trying to figure out how to best nudge young people into impactful careers, then both our future projects and our future operations are really nebulous. Once the “figuring out” stage is done[1] then hiring will be a lot easier because you have a more concrete idea regarding what the task is.
In reality I guess it is never really done, but for simplicity we can imagine a stage at which a lot of stuff has been figured out decently confidently and figuring things out isn’t as big a focus anymore.
Thanks for this, I think you’re spot on about the roles not always being attractive. I’ve heard of a senior person not applying for senior EA ops jobs because they involved managing the CEO’s inbox, or because the org was already too big for one person to handle everything but didn’t have anyone else to do the more mundane tasks.
While managing an inbox on a short-term, ‘needs must’ basis is understandable, making that ask of a very senior person is quite a red flag!
Thank you for writing this article. It confirms some of my suspicions.
I’m an investor (E2G), so I spend a lot of my time reviewing different companies and assessing their leadership.
My view is that EA puts too much weight on youth and intellect. For the first ops role in an organisation, why not hire somebody with no degree who’s done basic ops for SMEs for 20-30 years? Only when you’re hiring your 2nd / 3rd ops person do you then need to bring in a leader who can think strategically about ops, and their most important skill needs to be people management. Too often I see very smart people doing mundane tasks with no career opportunities.
Thanks for thinking through this. Did you give this feedback directly to the people and teams you interacted with? If so and if possible to share without identifying too much—how did that go?
In deliberately vague summary: where I could, and to greater or lesser degrees depending on what circumstances permitted.
Thank you for writing up this post. I’m currently interviewing for EA ops roles and it was very helpful to read this (I’m an EA outsider aside from reading books on the topic).
We’re hiring at the moment at The Center for Election Science for an operations director. We are open to EA and non EA applicants alike. We’d like to be able to pay more than $65K with a larger budget but we provide good benefits and are transparent. We are constantly trying to improve our process. If you have any feedback, feel free to share.
https://electionscience.org/ces-updates/were-hiring-a-director-of-operations-and-outreach/
I’ll take a look—any time you’d like me to get back to you by?
Within a week would be best as the opening closes within a few weeks.