Hi Misha. Thanks for your answer. I was wondering why you believe top EA cause areas can't make use of people with a wide range of backgrounds and preferences. It seems to me that many of the top causes require varied backgrounds. For example, reducing existential risk seems to require people in academia doing research, in policy enacting insights, in the media raising concerns, in tech building solutions, etc.
So let’s be more specific: current existential risk reduction focuses primarily on AI risk and biosecurity. Contributing to these fields requires quite a bit of specialization and a high level of interest in AI or biotechnology — this is the first filter. Let’s look at the hypothetical positions DeepMind can hire for: they can absorb a lot of research scientists, some policy/strategy specialists, and a few general writers/communication specialists. DM probably doesn’t hire many, if any, people majoring in business and management, nursing, education, criminal justice, anthropology, history, kinesiology, or the arts — and these are all very popular undergraduate majors. There is a limited number of organizations, and these organizations have their own peculiarities and cultural issues — this is another filter.
Seconding Khorton’s reply: as a community builder you deal with individuals, whom you can help select the path of most impact. It might be in an EA cause area or it might not. The aforementioned filters might be prohibitive to some and pose no problem to others. Everyday longtermism is likely the option available to most. But in any case, you deal with individuals, and individuals are peculiar :)
This makes a lot of sense, and thanks for sharing that post! It’s certainly true that my role is to help individuals, and as such it’s important to recognize their individuality and other priorities.
I suppose I also believe that one can contribute to these fields in the long run by building aptitudes, as Ines’ response discusses. But maybe these problems are urgent and require direct work soon, in which case I can see what you are saying about the high levels of specialization.
Agree; moving into “EA-approved” direct work later in your career, while initially doing skill- or network-building, is also a good option for some. I would actually think that if someone can achieve a lot in a conventional career, e.g., gaining some local prominence (whether as a goal in itself or as preparation to move into a more “directly EA” role), that’s great. My thinking here was especially influenced by an article about the neoliberalism community.
(The urgency of some problems, most prominently AI risk, might indeed be a decisive factor under some worldviews held in the community. I guess most people should plan their careers as makes the most sense to them under their own worldviews, but I can imagine changing my mind here. I should also acknowledge that I think short timelines and existential risk concerns are “psychoactive,” and people should be exposed to them carefully to avoid various failure modes.)