This was such a great post and I was nodding along throughout the whole article, except for the part about the importance of hiring people who are “strategically aligned”.
I think you often need people at the top of the organization to deeply share the org’s ethics and long-term goals; otherwise you find yourself in very long debates about theories of change, which ultimately affect a lot of the decisions (I wonder if you have experienced this?). The exception is when you find exceptional non-EA people who share EA goals while also having their own perspectives and motivations, and who are quite flexible and open-minded; they can indeed bring a fresh perspective. But I think those people are rare enough that it would make sense to filter at least a little in the interview for the person’s long-term goals, ethics, and values, and for how they would approach the org’s theory of change.
I don’t think we disagree much here, but where we do I’m trying to bottom out the cruxes…
I think it’s primarily risk appetite. I do agree, though, that the wrong hire can make things hellish, on many levels. But in my experience that’s usually been driven less by what people thought was important and more by the individual’s characteristics: their behaviours, degree of self-awareness, and tendency towards defensiveness and self-protection vs. openness. Usually, when it doesn’t work out because of irreconcilably different views on a problem, people just agree to disagree and move on!
Perhaps we also have different things in mind as meaningful signals of what makes a good leader for the org, and maybe different models of how a “signed up to doing good but not to every EA doctrinal belief” person would operate.
As mentioned in the post, how you (dis)agree is often the most important thing, which reflects what you’re saying about flexible and open-minded people with their own perspectives. I stand by the IIDM example, which illustrates how you don’t need to be signed up to every EA idea to add a lot of value to an organisation. I think it’s similar for X-risk-oriented pandemic preparedness, AI risk, etc.: sometimes the most strategically sound thing to do is more near-term, but those with a long-term orientation might not have that in their immediate view. The same applies to, e.g., deciding which funders / partners to work with, the skills / talent requirements within the team, etc.
(That said, if there’s an instinctive feeling that an EA-adjacent / non-EA hire, senior or otherwise, could threaten organisational alignment, that’s almost a recipe for unconscious ostracism and exclusion, in a self-fulfilling-prophecy kind of way. It’s just very human to react negatively to someone you feel is threatening. So yeah: another thing to reflect on if you are working in an EA org.)
Maybe another crux is how much of an exception those people really are? As I argued in the post, my hunch is there are many more people like that who are not getting a shot (see the ‘wild card’ example in the post again). I suspect this question could really only be answered by orgs doing a post-mortem on their recruitment: looking at why people fell off at different stages and asking (ironically, as in an actual post-mortem) whether anything could have been done differently.