This conclusion haunts me sometimes, although I've come to it from a different direction. I find it nontrivial to find fault with, given moral uncertainty. I haven't come across the "worlds where we know moral realism" argument before. Upvoted. Here are two possible objections as replies to this comment:
Suppose for the sake of argument that my credence is p < 0.1% that a being exists who has access to moral facts and can influence the world. Given this, the likelihood that one is confused on some basic question about morality would be higher than p.
Good point: this sort of worry seems sensible. For example, if you have zero credence in God, then the argument just obviously won't go through.
I guess that, from my assessment of the philosophy of religion literature, it doesn't seem plausible to have a credence in theism so low that background uncertainty about being confused on some basic question of morality would make the argument unsuccessful all things considered.
Regardless, I think the argument should still result in the possibility of theism having a larger influence on your decisions than the share of your probability space it takes up.
Reason could give us access to moral facts. There are plenty of informed writers who claim to have solved the is-ought problem. If one does not solve the is-ought problem, I'm not clear why this is better than moral subjectivism, though I didn't read the link on subjectivism.
I've never heard a plausible account of someone solving the is-ought problem; I'd love to check it out if people here have one. To me it seems, structurally, not to be the sort of problem that can be overcome.
I find subjectivism a pretty implausible view of morality. It seems to me that morality cannot be mind-dependent and non-universal: it can't be the sort of thing where, if someone successfully brainwashes enough people, they thereby get morality to change. Again, I'd be interested if people here defend a sophisticated view of subjectivism that doesn't have unpalatable results.
To link this to JP's other point, you might be right that subjectivism is implausible, but it's hard to tell how low a credence to give it.
If your credence in subjectivism + model uncertainty (+ I think also constructivism + quasi-realism + maybe others?) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie).
I'm pretty uncertain about my credence in each of those views, though.
This is a really nice way of formulating the critique of the argument; thanks, Max. It makes me update considerably away from the belief stated in the title of my post.
To capture my updated view, it'd be something like this: for those who have what I'd consider a "rational" probability for theism (i.e. between 1% and 99%, given my last couple of years of doing philosophy of religion) and a "rational" probability for some mind-dependent normative realist ethics (i.e. between 0.1% and 5%; I'm less confident here), the result of my argument is that a substantial proportion of an agent's decision space should be governed by the reasons the agent would face if theism were true.
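To make that quantitative claim concrete, here is a minimal sketch of one way it could be cashed out, using a simple stake-weighted model in the spirit of expected choiceworthiness under moral uncertainty. This is not the exact model from the post; all credences and "guidance" weights below are hypothetical placeholders I've chosen for illustration. The point is only that a modest credence in theism can claim an outsized share of decision weight if theism is the hypothesis under which one's choices best track the moral facts.

```python
# Illustrative only: a stake-weighted (expected-choiceworthiness-style) toy model.
# All numbers are hypothetical placeholders, not figures from the original argument.

p_theism = 0.05          # credence in theism (within the 1%-99% "rational" range above)
p_mind_dependent = 0.02  # credence in mind-dependent views (within the 0.1%-5% range)
p_other = 1 - p_theism - p_mind_dependent  # e.g. realism without reliable access

# Hypothetical "action-guidingness" weights: how well an agent's choices can
# track the moral facts under each hypothesis. The argument assumes this is
# much higher if theism is true (moral facts exist and are accessible).
guidance = {
    "theism": 1.0,
    "mind_dependent": 0.3,
    "other_realism_no_access": 0.05,
}

weights = {
    "theism": p_theism * guidance["theism"],
    "mind_dependent": p_mind_dependent * guidance["mind_dependent"],
    "other_realism_no_access": p_other * guidance["other_realism_no_access"],
}

total = sum(weights.values())
for view, w in weights.items():
    print(f"{view}: {w / total:.0%} of decision weight")

# With these placeholder numbers, theism gets roughly half of the decision
# weight despite only a 5% credence - the "larger influence than its share of
# probability space" claim from the comments above.
```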
Upvote for starting with praise, and splitting out separate threads.