I think there’s an underlying assumption you’re making here: that you can’t act rationally unless you can fully respond to any objection to your action, or provide some sort of perfect rationale for it. Otherwise, it’s at least possible to get things right just by actually giving the right weight to other people’s views, whether or not you can also explain philosophically why that is the right weight to give them. If you assume the picture of rationality on which you need this kind of full justification, pretty much nothing anyone does or feasibly could do is ever rational, and so the question of whether you can do rational cause prioritization without engaging with philosophy becomes uninteresting (the answer is no, but on that picture you can’t do it even after engaging with philosophy, or really ever act rationally).
On reflection, my actual view here is maybe that a binary rational/not-rational classification isn’t very useful; rather, things are more or less rational.
EDIT: I’d also say that something is going wrong if you think no one can ever update on testimony before deciding how much weight to give to other people’s opinions. As far as I remember, the literature on conciliationism and the equal weight view is about how you should update on learning propositions that are actually about other people’s opinions. But the following at least sometimes happens: someone tells you something that isn’t about people’s opinions at all, and then you get to add [edit: the proposition expressed by] that statement to your evidence, or do whatever it is you’re meant to do with testimony. The updating here isn’t on propositions about other people’s opinions at all. I don’t automatically see why expert testimony about philosophical issues couldn’t work like this, at least sometimes, though I can imagine views on which it doesn’t (for example, maybe you only get to update on the proposition expressed, rather than on the fact that the expert expressed that proposition, if the expert’s belief in the proposition amounts to knowledge, and no one has philosophical knowledge).