It is possible to rationally prioritise between causes without engaging deeply on philosophical issues
Underlying philosophical issues have clear implications for what you should prioritize, so I don't see how you can rationally prioritize between causes without engaging with these issues.
Nor is it clear how to defer on these issues when many highly intelligent, altruistically-minded people disagree with each other. These disagreements often stem from value judgements, and I don't think you can defer on your underlying values.
I have written about why I think EAs should understand certain key considerations to aid cause prioritization, and why I want the EA community to make it easier for people to do so: Important Between-Cause Considerations: things every EA should know about — EA Forum
I also produced a cause prioritization flowchart to make this easier, linked in another comment: A guided cause prioritisation flowchart — EA Forum