It is possible to rationally prioritise between causes without engaging deeply with philosophical issues
At least on the margin, sometimes. I don’t think “children dying is bad” requires engaging deeply with philosophical issues, and once you have the simple goal “fewer children die”, you can do some cause prioritization.
(In contrast, I think the straightforward strategy of “figure out the nature of goodness, then pick causes accordingly” requires ~solving philosophy.)
Great post. I explored the same question in rougher form during Draft Amnesty Week last year: https://forum.effectivealtruism.org/posts/nQYW5nq9iKpCKnYBj/question-how-to-form-beliefs-about-effectiveness-in-high
We came up with basically the same toy model!
This concern bubbles up from time to time in EA, but, as you say, is rarely addressed head-on. I wonder how often we privately believe the cause we’re championing can’t possibly be as good as our naive expected value estimate, but is still probably “very good”.
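One way to make that private belief precise (my own sketch, not anything from the post) is the textbook Bayesian adjustment of a noisy cost-effectiveness estimate, with illustrative numbers I’ve chosen for the example: if the true effectiveness is $\theta \sim \mathcal{N}(\mu_0, \sigma_0^2)$ and the naive estimate satisfies $\hat{\theta} \mid \theta \sim \mathcal{N}(\theta, \sigma^2)$, then the posterior mean is the precision-weighted average

$$\mathbb{E}[\theta \mid \hat{\theta}] = \frac{\sigma^2 \mu_0 + \sigma_0^2 \hat{\theta}}{\sigma^2 + \sigma_0^2}.$$

With, say, $\mu_0 = 1$, $\sigma_0 = 2$, and a very noisy estimate ($\sigma = 10$), a naive estimate of $\hat{\theta} = 50$ shrinks to a posterior mean of about $2.9$: nowhere near the naive figure, but still well above the prior mean, which is exactly the “can’t be as good as the estimate, but still very good” position.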