I guess I weakly disagree: I think that motivation and already having roots in an issue really are a big part of personal fit—especially now that lots of “classic EA jobs” seem highly oversubscribed, even if the cause areas are more neglected than they should be.
To make this more concrete: suppose your climate-change-motivated young EA thinks, 'well, now that I've learnt about AI risk, I guess I should pursue that career?', but doesn't feel excited about it. Even if they have the innate ability to excel in AI safety, they will still have to outcompete people who have already built up expertise there, many of whom will find it easier to motivate themselves to work hard because they are genuinely interested in AI.
(On the object level, I assume that many roles in climate change and gender equality stuff are in fact more impactful than many roles in more canonical EA cause areas).
I definitely agree that “some people scoping out their career options could benefit from first identifying high-impact career options, and only second thinking about which ones they might have a great personal fit for”. But others could benefit from the opposite consideration, especially when taking into account moral and epistemic uncertainty about the relative value of different cause areas, and replaceability in areas where they would be limited to less specialized roles.
I think there’s a real tension between “it’s best for everyone to just work on their favourite thing” and “it’s best for everyone to go work at OpenAI on AI Policy,” and people make mistakes in both directions, both in their own careers and when giving advice to others. I personally believe that there are enough high-impact opportunities in climate change (esp. considering air quality) and gender equality (esp. in a global sense) for them to be great areas in which to build aptitudes and do the most good, but it’s definitely not a given.
See Holden Karnofsky’s aptitudes-based perspective.
To be clear, I don’t think this post says anything wrong, and I agree with it; although I don’t see the same recommendation often made to people who work on mechanistic interpretability or cause prioritization because they already liked it. (It’s usually people criticizing the EA movement who say things like: “There are a lot of people in EA who just wanted a legitimate reason or excuse to sit around and talk about these big questions. But that made it feel like it’s a real job and they’re doing something good in the world instead of just sitting in a room and talking about philosophy.”)