Ironically, having said this, I also think I disagree with you in sort of the opposite direction on two specific points (though I think these disagreements are quite superficial and minor).
"As such, I think it makes sense for EAs to engage with the various IBCs to decide on a preferred cause area, but after that to restrict further reading and engagement to within that preferred cause area (and not within other cause areas they have already ruled out)."
I agree with the basic idea that it's probably best to start off thinking mostly about things like IBCs, and then on average gradually increase how much one focuses on prioritising and acting within a cause area. But it doesn't seem ideal to me to see this as a totally one-directional progression from one distinct stage to another.
I think that, even to begin with, it might often be good to already spend some time on prioritising and acting within a cause area.
And more so, I think that, even once one has mostly settled on one cause area, it could occasionally be good to spend a little time thinking about IBCs again. E.g., let's say a person decides to focus on longtermism, and ends up in a role where they build great skills and networks related to lobbying. But those skills and networks would also be useful for lobbying on other issues, and the person is asked whether they could take on a potentially very impactful role using the same skills and networks to reduce animal suffering. (Maybe there's some specific reason why they'd be unusually well-positioned to do that.) I think it'd probably then be worthwhile for that person to again think a bit about cause prioritisation.
I don't think they should focus on the question "Is there a consideration I missed earlier that means near-term animal welfare is a more important cause than longtermism?" I think it should be more like "Do/Should I think that near-term animal welfare is close enough to as important a cause as longtermism that I should take this role, given considerations of comparative advantage, uncertainty, and the community taking a portfolio approach?"
(But I think this is just a superficial disagreement, as I expect you'd actually agree with what I've said, and that you might even have put in the sentence I'm disagreeing with partly to placate my own earlier comments :D)
"For example, if one has read up on population ethics and is confident that they hold a person-affecting view, one can rule out reducing extinction risk at that point without having to engage with that area further (i.e. by understanding the overall probability of x-risk this century)."
I'm guessing you mean "overall probability of extinction risk", rather than the overall probability of x-risk as a whole? I say this because other types of existential risk (especially unrecoverable dystopias) could still be high priorities from some person-affecting perspectives.
If that's what you mean, then I think I basically agree with the point you're making. But it's still possible for someone with a person-affecting view to prioritise reducing extinction risk (not just other existential risks), primarily because extinction would harm the people alive at the time of the extinction event. So it still might be worth that person at least spending a little time checking whether the overall probability of extinction seems high enough for them to prioritise it on those grounds. (Personally, I'd guess extinction risk wouldn't be a top priority on purely person-affecting grounds, but would still be decently important. I haven't thought about it much, though.)