I definitely agree that "some people scoping out their career options could benefit from first identifying high-impact career options, and only second thinking about which ones they might have a great personal fit for". But others could benefit from the opposite consideration, especially when taking into account moral and epistemic uncertainty about the relative value of different cause areas, and replaceability in areas where they would be limited to less specialized roles.
I think there's a real tension between "it's best for everyone to just work on their favourite thing" and "it's best for everyone to go work at OpenAI on AI Policy," and people make mistakes in both directions, both in their own careers and when giving advice to others. I personally believe that there are enough high-impact opportunities in climate change (esp. considering air quality) and gender equality (esp. in a global sense) for them to be great areas in which to build aptitudes and do the most good, but it's definitely not a given.
See Holden Karnofsky's aptitudes-based perspective.
To be clear, I don't think this post says anything wrong, and I agree with it; although I don't see the same recommendation often made to people who work on mechanistic interpretability or cause prioritization because they already liked it. (It's usually people criticizing the EA movement who say things like: "There are a lot of people in EA who just wanted a legitimate reason or excuse to sit around and talk about these big questions. But that made it feel like it's a real job and they're doing something good in the world instead of just sitting in a room and talking about philosophy.")