This conclusion haunts me sometimes, although I’ve come to it from a different direction. Given moral uncertainty, I find it hard to fault. I haven’t come across the “worlds where we know moral realism” argument before. Upvoted. Here are two possible objections, as replies to this comment:
Suppose, for the sake of argument, that I have a credence of less than 0.1% that a being exists who has access to moral facts and can influence the world. Given a credence that low, the likelihood that I’m simply confused about some basic question of morality would be higher than my credence in such a being.
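To make that comparison concrete, here is a toy calculation; both numbers are purely illustrative assumptions of mine, not anything argued for above:

```python
# Sketch of the objection: if my credence in a moral-fact-accessing being
# is lower than my estimated rate of being flatly confused about basic
# moral questions, then "I'm just confused" is the more likely explanation.
# Both numbers are illustrative assumptions.

p_being = 0.001            # stipulated upper bound from the comment
p_basic_confusion = 0.01   # assumed rate of confusion on basic moral questions

print(p_basic_confusion / p_being)  # 10.0 -> confusion is 10x as likely
```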
Good point. This sort of worry seems sensible; for example, if you have zero credence in God, then the argument just obviously won’t go through.
That said, from my assessment of the philosophy of religion literature, it doesn’t seem plausible to hold a credence in theism low enough that background uncertainty about being confused on some basic question of morality would make the argument unsuccessful all things considered.
Regardless, I think the argument should still give the possibility of theism a larger influence on your decisions than the mere share of your probability space it takes up.
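Here is a minimal sketch of how that can happen under expected-choiceworthiness reasoning. The hypotheses, credences, and stakes below are all my own illustrative assumptions; the point is only the structure, where large stakes under a low-credence hypothesis can dominate the comparison:

```python
# A minimal sketch of expected-choiceworthiness reasoning in which a
# low-credence hypothesis (theism) dominates the choice between two
# options because the stakes under it are assumed to be much larger.
# All credences and choiceworthiness values are illustrative.

hypotheses = {
    # name: (credence, {option: choiceworthiness under that hypothesis})
    "theism": (0.05, {"A": 10.0, "B": -10.0}),
    # Assumption: under non-theistic realism we are largely in the dark
    # about which option is right, so the values nearly cancel.
    "non-theistic realism": (0.45, {"A": 0.1, "B": -0.1}),
    # Assumption: under anti-realism neither option is more choiceworthy.
    "anti-realism": (0.50, {"A": 0.0, "B": 0.0}),
}

def expected_choiceworthiness(option):
    return sum(credence * values[option]
               for credence, values in hypotheses.values())

for option in ("A", "B"):
    print(option, expected_choiceworthiness(option))
# A -> 0.545, B -> -0.545. Theism contributes 1.0 of the 1.09 spread
# between A and B (about 92%), despite only 5% of the probability space.
```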
Reason could give us access to moral facts. There are plenty of informed writers who would claim to have solved the is-ought problem. If one does not solve the is-ought problem, it’s not clear to me why this view is better than moral subjectivism, though I didn’t read the link on subjectivism.
I’ve never heard a plausible account of someone solving the is-ought problem; I’d love to check one out if anyone here has one. To me it seems, structurally, not to be the sort of problem that can be overcome.
I find subjectivism a pretty implausible view of morality. It seems to me that morality cannot be mind-dependent and non-universal: it can’t be the sort of thing where, if someone successfully brainwashes enough people, they thereby change morality. Again, I’d be interested if anyone here defends a sophisticated version of subjectivism that avoids such unpalatable results.
To link this to JP’s other point, you might be right that subjectivism is implausible, but it’s hard to tell how low a credence to give it.
If your combined credence in subjectivism and model uncertainty (and, I think, also constructivism, quasi-realism, and maybe other views) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie).
I’m pretty uncertain about my credence in each of those views, though.
This is a really nice way of formulating the critique of the argument; thanks, Max. It makes me update considerably away from the belief stated in the title of my post.
To capture my updated view, it’d be something like this: for those who hold what I’d consider a ‘rational’ credence in theism (i.e. between 1% and 99%, given my last couple of years of doing philosophy of religion) and a ‘rational’ credence in some mind-dependent normative realist ethics (i.e. between 0.1% and 5%; I’m less confident here), the result of my argument is that a substantial proportion of an agent’s decision space should be governed by the reasons the agent would face if theism were true.
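As a rough illustration of what ‘a substantial proportion’ could look like, here is a toy sweep over credences in theism. The stakes multiplier k is entirely my own assumption (standing in for how much more decision-relevant moral facts are in theistic worlds on this argument), so read the outputs as a sketch rather than a result:

```python
# Toy sweep: what share of the decision-relevant spread between two
# options is driven by theistic worlds, if per-world stakes are k under
# theism and 1 otherwise? The value k = 20 is an illustrative assumption.

def theistic_share(p_theism, k=20.0):
    """Fraction of the expected-choiceworthiness spread contributed by
    theistic worlds, given stakes k under theism and 1 otherwise."""
    theistic = p_theism * k
    non_theistic = (1.0 - p_theism) * 1.0
    return theistic / (theistic + non_theistic)

for p in (0.01, 0.05, 0.10, 0.50):
    print(f"P(theism)={p:.2f} -> share={theistic_share(p):.0%}")
# P(theism)=0.01 -> share=17%
# P(theism)=0.05 -> share=51%
# P(theism)=0.10 -> share=69%
# P(theism)=0.50 -> share=95%
```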
Upvote for starting with praise, and splitting out separate threads.