I don’t think we disagree much here, but where we do I’m trying to bottom out the cruxes…
I think it’s primarily risk appetite. I do agree, though, that the wrong hire can make things hellish, on many levels. But in my experience that’s usually been driven less by what people thought was important and more by the individual’s characteristics, behaviours, degree of self-awareness, and tendency towards defensiveness / self-protection vs. openness. Usually, when the issue is irreconcilably different views on a problem, people just agree to disagree and move on!
Perhaps we also have different things in our heads as meaningful signals of being a good leader for the org, and maybe different models of how a “signed up to doing good but not every EA doctrinal belief” person would operate.
As mentioned in the post, how you (dis)agree is often the most important thing, which reflects what you’re saying about flexible and open-minded people with their own perspectives. I stand by the IIDM example, which illustrates how you don’t need to be signed up to every EA idea to add a lot of value to an organisation. I think it’s similar for X-risk-oriented pandemic preparedness, AI risk, etc.: sometimes the most strategically sound thing to do is more near-term, but those with a long-term orientation might not have that in their immediate view. The same would apply to, e.g., deciding which funders / partners to work with, skills and talent requirements within the team, and so on.
(That said, if there’s an instinctive feeling that an EA-adjacent or non-EA hire, senior or otherwise, could threaten organisational alignment, that’s almost a recipe for unconscious ostracism and exclusion, in a self-fulfilling-prophecy kind of way. It’s just very human to react negatively to someone you feel is threatening. So yeah, another thing to reflect on if you are working in an EA org.)
Maybe another crux is how much of an exception those people are? As I argued in the post, my hunch is that there are many more people like that who are not getting a shot; see the ‘wild card’ example in the post again. I suspect this question could really only be answered by orgs doing post-mortems on their recruitment rounds: looking at why people fell off at different stages and asking (ironically, as in an actual post-mortem) whether anything could have been done differently.