To what extent do you expect to ‘accept’ people’s preferred cause areas versus introduce people to ideas that may help them make the most informed decision on what cause area they should focus on?
For example, if someone comes to you and says “I want to work on global health”, will you say “that’s great, here’s our advice on that cause area”, or might you say “that’s great, although just to check, have you engaged with the EA literature on cause areas, and do you understand why some people don’t prioritise global health, e.g. due to cluelessness about the expected impacts of interventions in global health?”. I chose global health as an example, but this can obviously apply to any cause area. To clarify, I’m not certain which of accepting vs. educating is the best approach.
Similarly, how will you deal with people who don’t really have a clue which cause area they are most interested in?
Hi Jack, thanks for the great question.
In general, I don’t think there’s one best approach. Where we want to be on the education/acceptance trade-off depends on the circumstances. It might be easiest to go over some examples (including the ones you gave) and explain how I think they differ.
First, I think the simplest case is the one you ended with. If someone doesn’t know which cause area they’re interested in and wants our help with cause prioritization, there aren’t many trade-offs here: we’d strongly recommend relevant materials to help them make informed decisions about how to maximize their impact.
Second, there are cases where someone is interested in cause areas that don’t seem plausibly compatible with EA, broadly defined. Here we tend towards the “educate” side of the spectrum (as you call it), though in our writing we still aim not to make that a prerequisite for engaging with our recommendations and advice. That said, these nuances may be moot in the near term (at least several months, possibly longer): given how we’re prioritizing content, we probably won’t have any material for cause areas that are not firmly within EA.
Where the deliberation is between EA cause areas (as in your example), there are some nuances that will probably be evident in our content even from day one (though they may change over time). Our recommended process for choosing a career will involve engaging with important cause prioritization questions, including who deserves moral concern (e.g. those far from us geographically, non-human animals, and those in the long-term future). Within more specific content, e.g. individual career path profiles, we intend to refer to these considerations but not force people to engage with them. To take your global health example: in a career path profile about development economics, we would highlight that one disadvantage of this path is that it is mainly promising from a near-term perspective and unclear from a long-term perspective, with links to relevant materials. That said, someone who has decided they’re interested in global health, doesn’t follow our recommended process for choosing a career, and navigates directly to global health-related careers will primarily be reading content about that cause area (and not material on whether it is the top cause area).

Our approach to 1:1 consultation is similar: our top recommendation is for people to engage with relevant materials, but we are willing to assist with narrower questions if that is what they’re interested in (though, much as in the non-EA case, we expect demand to exceed our capacity for the foreseeable future, and may in practice prioritize those who are pursuing every avenue to increasing their impact).
I hope this provides at least some clarity; let me know if you have other questions.
Thanks, that all makes sense, and I agree that a one-size-fits-all approach is unlikely to be appropriate.