Ironically, having said this, I also think I disagree with you in sort of the opposite direction on two specific points (though I think the disagreement is quite superficial and minor).
As such, I think it makes sense for EAs to engage with the various IBCs to decide on a preferred cause area, but after that to restrict further reading and engagement to within that preferred cause area (and not within other cause areas they have already ruled out).
I agree with the basic idea that it’s probably best to start off thinking mostly about things like IBCs, and then on average gradually increase how much one focuses on prioritising and acting within a cause area. But it doesn’t seem ideal to me to see this as a strictly one-directional progression between two very distinct stages.
I think even to begin with, it might often be good to already be spending some time on prioritising and acting within a cause area.
And more so, I think that, even once one has mostly settled on one cause area, it could occasionally be good to spend a little time thinking about IBCs again. E.g., let’s say a person decides to focus on longtermism, and ends up in a role where they build great skills and networks related to lobbying. But these skills and networks are also useful for lobbying on other issues, and the person is asked whether they could take on a potentially very impactful role using those same skills and networks to reduce animal suffering. (Maybe there’s some specific reason why they’d be unusually well-positioned to do that.) I think it’d probably then be worthwhile for that person to again think a bit about cause prioritisation.
I don’t think they should focus on the question “Is there a consideration I missed earlier that means near-term animal welfare is a more important cause than longtermism?” I think it should be more like “Do/Should I think that near-term animal welfare is close enough in importance to longtermism that I should take this role, given considerations of comparative advantage, uncertainty, and the community’s portfolio approach?”
(But I think this is just a superficial disagreement, as I expect you’d actually agree with what I’ve said, and that you might even have put in the sentence I’m disagreeing with partly to placate my own earlier comments :D)
For example, if one has read up on population ethics and is confident that they hold a person-affecting view, one can rule out reducing extinction risk at that point without having to engage with that area further (i.e. by understanding the overall probability of x-risk this century).
I’m guessing you mean “overall probability of extinction risk”, rather than the overall probability of x-risk as a whole? I say this because other types of existential risk (especially unrecoverable dystopias) could still be high priorities from some person-affecting perspectives.
If that’s what you mean, then I think I basically agree with the point you’re making. But it’s still possible for someone with a person-affecting view to prioritise reducing extinction risk (not just other existential risks), primarily because extinction would harm the people alive at the time of the extinction event. So it still might be worth that person spending at least a little time checking whether the overall probability of extinction seems high enough for them to prioritise it on those grounds. (Personally, I’d guess extinction risk wouldn’t be a top priority on purely person-affecting grounds, but would still be decently important. I haven’t thought about it much, though.)