There’s a large difference (with many positions in between) between never outsourcing one’s epistemic work and accepting something like an equal weight view. There is, at this point, almost no consensus on this issue. To do proper, rational cause prioritization, one must engage with the philosophy here directly, at the very least on conciliationism itself.
I think there’s an underlying assumption you’re making here: that you can’t act rationally unless you can fully respond to any objection to your action, or provide some sort of perfect rationale for it. Otherwise, it’s at least possible to get things right just by actually giving the right weight to other people’s views, whether or not you can also explain philosophically why that is the right weight to give them. If you assume the picture of rationality on which you need this kind of full justification, then pretty much nothing anyone does or feasibly could do is ever rational, and so the question of whether you can do rational cause prioritization without engaging with philosophy becomes uninteresting (the answer is no, but then in practice you can’t do it even after engaging with philosophy, or really ever act rationally).
On reflection, my actual view here is maybe that a binary rational/not-rational classification isn’t very useful; rather, things are more or less rational.
EDIT: I’d also say that something is going wrong if you think no one can ever update on testimony before deciding how much weight to give to other people’s opinions. As far as I remember, the literature on conciliationism and the equal weight view is about how you should update on learning propositions that are actually about other people’s opinions. But the following at least sometimes happens: someone tells you something that isn’t about people’s opinions at all, and then you get to add [edit: the proposition expressed by] that statement to your evidence, or do whatever it is you’re meant to do with testimony. The updating here isn’t on propositions about other people’s opinions at all. I don’t automatically see why expert testimony about philosophical issues couldn’t work like this, at least sometimes, though I can imagine views on which it doesn’t (for example, maybe you only get to update on the proposition expressed, rather than on the fact that the expert expressed it, if the expert’s belief in the proposition amounts to knowledge, and no one has philosophical knowledge).
But if there’s a single issue in the world we should engage with deeply instead of outsourcing, isn’t it this one? Isn’t this pretty much the most important question in the world from the perspective of an effective altruist?
It’s definitely good for people to engage with it deeply if they make the results of their engagement public (I don’t think most people, working on their own, can outperform the community or the wider non-EA world of experts at optimizing their own decisions). But the question just asked whether it is possible to rationally set priorities without doing a lot of philosophical work yourself, not whether that would be the best thing to do.
Sure, but it’s an extreme view that it’s never ok to outsource epistemic work to other people.