One issue with moral uncertainty is that I think it means much less for moral antirealists. As a moral antirealist myself, I still use moral uncertainty, but only over views I personally find attractive (based on argument, intuition, etc.), that I think I could endorse after further reflection, and that I currently have a hard time deciding between. This lets me assign little weight to views I don't find attractive. A moral realist, by contrast, has to defend their intuitions (both make positive arguments for them and address counterarguments) and refute intuitions they don't share but others hold, a much higher bar; otherwise they're just pretending that their own intuitions track the moral truth while others' do not. And most likely they'll still give undue weight to their own intuitions.
I don’t know what EA’s split is on moral realism/antirealism, though.
Of course, none of this says we shouldn’t try to cooperate with those who hold views we disagree with.
I've come to think that evidential cooperation in large worlds and, in different ways, preference utilitarianism push even antirealists toward relatively specific moral compromises, and that working out what those compromises are requires impartial empirical investigation. (This may not apply to antirealists whose moral goals are relatively easy to realize, or ones that others can't help much with, say, protecting your child from certain dangers or being very happy. But it does apply to my drive to reduce suffering.)