This makes a lot of sense, and thanks for sharing that post! It's certainly true that my role is to help individuals, and as such it's important to recognize their individuality and other priorities.
I suppose I also believe that one can contribute to these fields in the long run by building aptitudes, as Ines' response discusses, but maybe these problems are urgent and require direct work soon, in which case I can see what you're saying about the high level of specialization needed.
Agree; moving into "EA-approved" direct work later in your career, while initially focusing on skill- or network-building, is also a good option for some. I would actually think that if someone can achieve a lot in a conventional career, e.g., gaining some local prominence (either as a goal in itself or as preparation for moving into a more "directly EA" role), that's great. My thinking here was especially influenced by an article about the neoliberalism community.
(The urgency of some problems, most prominently AI risk, might indeed be a decisive factor under some worldviews held in the community. I guess most people should plan their careers in whatever way makes the most sense under their own worldviews, but I can imagine changing my mind here. I should acknowledge that I think short timelines and existential risk concerns are "psychoactive," and people should be exposed to them carefully to avoid various failure modes.)