I’ve never heard a plausible account of someone solving the is-ought problem; I’d love to check it out if people here have one. To me it seems structurally not to be the sort of problem that can be overcome.
I find subjectivism a pretty implausible view of morality. It seems to me that morality cannot be mind-dependent and non-universal; it can’t be the sort of thing where, if someone successfully brainwashes enough people, they can get morality to change. Again, I’d be interested if people here can defend a sophisticated view of subjectivism that doesn’t have unpalatable results.
To link this to JP’s other point, you might be right that subjectivism is implausible, but it’s hard to tell how low a credence to give it.
If your credence in subjectivism + model uncertainty (+ I think also constructivism + quasi-realism + maybe others?) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie).
I’m pretty uncertain about my credence in each of those views though.
This is a really nice way of formulating the critique of the argument, thanks Max. It makes me update considerably away from the belief stated in the title of my post.
To capture my updated view, it’d be something like this: for those who have what I’d consider a ‘rational’ probability for theism (i.e. between 1% and 99%, given my last couple of years of doing philosophy of religion) and a ‘rational’ probability for some mind-dependent normative realist ethics (i.e. between 0.1% and 5% - less confident here), the result of my argument is that a substantial proportion of an agent’s decision space should be governed by the reasons the agent would face if theism were true.