To link this to JP’s other point, you might be right that subjectivism is implausible, but it’s hard to tell how low a credence to give it.
If your credence in subjectivism + model uncertainty (+ I think also constructivism + quasi-realism + maybe others?) is sufficiently high relative to your credence in God, then this weakens your argument (although it still seems plausible to me that theistic moralities end up with a large slice of the pie).
I’m pretty uncertain about my credence in each of those views though.
This is a really nice way of formulating the critique of the argument, thanks Max. It makes me update considerably away from the belief stated in the title of my post.
To capture my updated view, it’d be something like this: for those who have what I’d consider a ‘rational’ probability for theism (i.e. between 1% and 99%, given my last couple of years of doing philosophy of religion) and a ‘rational’ probability for some mind-dependent normative realist ethics (i.e. between 0.1% and 5%; I’m less confident here), the result of my argument is that a substantial proportion of an agent’s decision space should be governed by the reasons the agent would face if theism were true.
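To make the “slice of the pie” talk concrete, here is a toy sketch of the simplest credence-weighting model, where each metaethical view gets decision-weight in proportion to one’s credence in it. All the numbers and the function name are hypothetical illustrations, not figures from the argument itself:

```python
# Toy model: views split the decision-space "pie" in proportion to credence.
# All credences below are hypothetical placeholders.

def theistic_slice(p_theism, p_competitors):
    """Fraction of decision-relevant weight going to theistic reasons,
    where p_competitors is one's combined credence in the rival cluster
    (subjectivism, constructivism, quasi-realism, etc.)."""
    total = p_theism + p_competitors
    return p_theism / total

# Example: 20% credence in theism vs. 5% in the rival cluster
# gives theistic reasons roughly 80% of the weighted pie.
print(theistic_slice(0.20, 0.05))
```

On this simple model, even a modest credence in theism dominates whenever the combined credence in the mind-dependent cluster is lower still, which is the shape of the worry raised above about how low a credence subjectivism deserves.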