However, these just wouldn’t constitute normative reasons for action, and that’s just what you need for an action to be choice-worthy.
[...]
As I don’t think that mere desires create reasons for action, I think we can ignore them unless they are actually prudential reasons.
I don’t know how to argue against this; you seem to be taking it as axiomatic. The one thing I can say is that it seems clearly obvious to me that your desires and goals can make some actions better to choose than others. It only becomes non-obvious if you expect there to be some external-to-you force telling you how to choose actions, but I see no reason to assume that. It really is fine if your actions aren’t guided by some overarching rule granted authority by virtue of being morality.
But I suspect this isn’t going to convince you. Can we simply assume that prudential reasons exist and figure out the implications?
The normative/prudential distinction is one developed in the relevant literature; see this abstract of a paper by Roger Crisp to get a sense of it.
Thanks, I think I’ve got it now. (Also it seems to be in your appendix, not sure how I missed that before.)
The issue is that we’re trying to work out how to act with uncertainty about what sort of world we’re in?
I know, and I think in the very next paragraph I try to capture your view, and I’m fairly confident I got it right based on your comment.
However, it seems jarring to think that a person who does what there is most moral reason to do could have failed to do what there was most, all things considered, reason for them to do.
This seems tautological when you define morality as “binding oughtness” and compare against regular oughtness (which presumably applies to prudential reasons). But why stop there? Why not go to metamorality, or “binding meta-oughtness” that trumps “binding oughtness”? For example, “when faced with uncertainty over ought statements, choose the one that most aligns with prudential reasons”.
It is again tautologically true that a person who does what there is most metamoral reason to do could not have failed to do what there was most, all things considered, reason for them to do. It doesn’t sound as compelling, but I claim that is because we don’t have metamorality as an intuitive concept, whereas we do have morality as an intuitive concept.
Thanks for the really thoughtful engagement.
I don’t know how to argue against this; you seem to be taking it as axiomatic.
I agree; my view stems from a bedrock of intuition: just as the descriptive fact that ‘my table has four legs’ won’t create normative reasons for action, neither will the descriptive fact that ‘Harry desires chocolate ice-cream’ create them. It doesn’t seem obvious to me that the desire fact is much more likely to create normative reasons than the table fact. If we don’t think the table fact would, then we shouldn’t think the desire fact would either.
This seems tautological when you define morality as “binding oughtness” and compare against regular oughtness (which presumably applies to prudential reasons).
Apologies for a lack of clarity: my use of ‘binding oughtness’ was meant to apply to both prudential and moral reasons for action. Another way of describing the property that normative reasons seem to have is that they create an external rational tug on us to do a particular thing.
So I think both prudential and moral reasons create this sort of rational tug on us, and my further claim is that if both prudential and moral reasons exist and conflict in a given case, then the moral reasons will override/outweigh the prudential reasons, for the reasons given in your quotation.
Why not go to metamorality, or “binding meta-oughtness” that trumps “binding oughtness”? For example, “when faced with uncertainty over ought statements, choose the one that most aligns with prudential reasons”.
I worry that I’m not understanding the full force of your objection here. I have a very low credence that your proposed meta-normative rule would be true. What arguments are there for it?
There seems to be something that makes you think that moral reasons should trump prudential reasons. The overall thing I’m trying to do is narrow down on what that is. In most of my comments I thought I had identified it, and so I argued against it, but it seems I’m constantly wrong about that. So let me try to figure it out explicitly:
How much would you agree with each of these statements:
1. If there is a conflict between moral reasons and prudential reasons, you ought to do what the moral reasons say.
2. If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions one ought to take, then you ought to do what that process prescribes, regardless of what you desire.
3. If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions to take to maximize utility, then you ought to do what that process prescribes, regardless of what you desire.
4. If there is an external-to-you entity satisfying property X that prescribes actions you should take, then you ought to do what it says, regardless of what you desire. (For what value of X would you agree with this statement?)
I have a very low credence that your proposed meta-normative rule would be true.
I also have a very low credence in that meta-normative rule. I meant to contrast it with the meta-normative rule “binding oughtness trumps regular oughtness”, which I interpreted as “moral reasons trump prudential reasons”. But it seems I misunderstood what you meant there, since you mean “binding oughtness” to apply to both moral and prudential reasons, so ignore that argument.
I agree; my view stems from a bedrock of intuition: just as the descriptive fact that ‘my table has four legs’ won’t create normative reasons for action, neither will the descriptive fact that ‘Harry desires chocolate ice-cream’ create them.
This makes me mildly worried that you aren’t able to imagine the worldview where prudential reasons exist. Though I have to admit I’m confused why under this view there are any normative reasons for action—surely all such reasons depend on descriptive facts? Even with religions, you are basing your normative reasons for action upon descriptive facts about the religion.
(Btw, random note, I suspect that Ben Pace above and I have very similar views, so you can probably take your understanding of his view and apply it to mine.)
There seems to be something that makes you think that moral reasons should trump prudential reasons.
The reason I have is in my original post. Namely, I have a strong intuition that it would be very odd to say that someone who had done what there was most moral reason to do had failed to do what there was most ‘all things considered’ reason for them to do.
If my intuition here is right, then moral reasons must always trump prudential reasons. Note I don’t have anything more to offer than this intuition; sorry if I made it seem like I did!
On your list of bullets:
1. 95%
2. 99%
3. 99% (Supposing for simplicity’s sake that I had a credence of one in utilitarianism, which I don’t)
4. I don’t think I understand the setup of this question; it doesn’t seem to make a coherent sentence to replace X with a number in the way you have written it.
This makes me mildly worried that you aren’t able to imagine the worldview where prudential reasons exist.
I think I do have an intuitive understanding of what a prudential reason for action would be. Derek Parfit discusses the case of ‘future Tuesday indifference’ in *On What Matters*, where, prior to Tuesday, a person is happy to sign up for any amount of pain on Tuesdays in exchange for the tiniest benefit beforehand, even though it is really horrible when they get to Tuesdays. My view is that *if* prudential reasons exist, then avoiding future Tuesday indifference would be the most plausible sort of candidate for a prudential reason we might have.
Though I have to admit I’m confused why under this view there are any normative reasons for action—surely all such reasons depend on descriptive facts? Even with religions, you are basing your normative reasons for action upon descriptive facts about the religion.
So I think my view is similar to Parfit’s on this. If normative truths exist, then they are ‘irreducibly normative’: they do not dissolve down to any descriptive statement. If someone has a reason to do X (where X is some descriptively specified action), then this just means there is an irreducibly normative fact that makes this the case.
4. I don’t think I understand the setup of this question; it doesn’t seem to make a coherent sentence to replace X with a number in the way you have written it.
I did mean for you to replace X with a phrase, not a number.
If my intuition here is right, then moral reasons must always trump prudential reasons. Note I don’t have anything more to offer than this intuition; sorry if I made it seem like I did!
Your intuition involves the complex phrase “moral reason”, for which I could imagine multiple different interpretations. I’m trying to figure out which interpretation is correct.
Here are some different properties that “moral reason” could have:
1. It is independent of human desires and goals.
2. It trumps all other reasons for action.
3. It is an empirical fact about either the universe or math that can be derived by observation of the universe and pure reasoning.
My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I’m pushing on that.
A secondary claim is that if it does not satisfy property 3, then you can never infer it and so you might as well ignore it, but “irreducibly normative” sounds to me like it does not satisfy property 3.
Here are some models of how you might be thinking about moral reasons:
a) Moral reasons are defined as the reasons that satisfy property 1. If I think about those reasons, it seems to me that they also satisfy property 2.
b) Moral reasons are defined as the reasons that satisfy property 2. If I think about those reasons, it seems to me that they also satisfy property 1.
c) Moral reasons are defined as the reasons that satisfy both property 1 and property 2.
My responses to a) and b) are of the form “That inference seems wrong to me and I want to delve further.”
My response to c) is “Define prudential reasons as the reasons that satisfy property 2 and not-property 1, then prudential reasons and moral reasons both trump all other reasons for action, which seems silly/strange.”
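To spell out why that would be silly/strange, here is a minimal formal sketch (my own formalization, not anything either of us has said so far): if “trumps all other reasons” means strict priority over every reason outside one’s own class, then two disjoint classes cannot both trump.

```lean
-- A minimal sketch, under the stated assumptions: `outweighs` is a
-- hypothetical stand-in for "trumps", assumed asymmetric (if a outweighs
-- b, then b does not outweigh a).
variable {Reason : Type}

/-- Class `C` trumps: every member of `C` outweighs every reason outside `C`. -/
def Trumps (outweighs : Reason → Reason → Prop) (C : Reason → Prop) : Prop :=
  ∀ r, C r → ∀ s, ¬ C s → outweighs r s

-- Two disjoint classes (moral M, prudential P) cannot both trump once each
-- has a member: r in M must outweigh s in P, and vice versa, contradicting
-- asymmetry.
example (outweighs : Reason → Reason → Prop) (M P : Reason → Prop)
    (disjoint : ∀ x, M x → ¬ P x)
    (asym : ∀ a b, outweighs a b → ¬ outweighs b a)
    (hM : Trumps outweighs M) (hP : Trumps outweighs P)
    (r s : Reason) (hr : M r) (hs : P s) : False :=
  asym r s
    (hM r hr s (fun hsM => disjoint s hsM hs))  -- r outweighs s, since s ∉ M
    (hP s hs r (disjoint r hr))                 -- s outweighs r, since r ∉ P
```

Of course, nothing forces that reading of “trumps”; the sketch just makes precise why, on definition c), the two classes can’t coexist unless one of the assumed properties is weakened.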
My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I’m pushing on that.
I do think they are correlated, because according to my intuitions both are true of moral reasons. However, I wouldn’t want to argue that (2) is true because (1) is true. I’m not sure why (2) is true of moral reasons; I just have a strong intuition that it is and haven’t come across any defeaters for that intuition.
A secondary claim is that if it does not satisfy property 3, then you can never infer it and so you might as well ignore it, but “irreducibly normative” sounds to me like it does not satisfy property 3.
This seems false to me. It’s typically thought that an omniscient being (by definition) could know these non-natural, irreducibly normative facts. All we’d need is some mechanism that connects humans with them. One mechanism, as I discuss in my post, is that God puts them in the brains of humans. We might wonder how God could know the non-natural facts; one explanation might be that God is the truthmaker for them, and if he is, then it seems plausible he would know them.
On your three options, (a) seems closest to what I believe. Note that my preferred definitions would be:
‘What I have most prudential reason to do is what benefits me most (benefits in an objective rather than subjective sense).’
‘What I have most moral reason to do is what there is most reason to do impartially considered (i.e. from the point of view of the universe).’
To be clear, it’s very plausible to me that what ‘benefits you most’ is not necessarily what you desire most, as seen in Parfit’s discussion of future Tuesday indifference mentioned above. That’s why I use the objective caveat.
Okay, cool, I think I at least understand your position now. Not sure how to make progress, though. I guess I’ll just try to clarify how I respond when I imagine holding the position you do.
From my perspective, the phrase “moral reason” has both the connotation that it is external to humans and that it trumps all other reasons, and that’s why the intuition is so strong. But if it is decomposed into those two properties, it no longer seems (to me) that they must go together. So from my perspective, when I imagine how I would justify the position you take, it seems to be a consequence of how we use language.
What I have most moral reason to do is what there is most reason to do impartially considered (i.e. from the point of view of the universe)
My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are; otherwise I don’t know how to identify the impartial reasons.
I’ve found the conversation productive, thanks for taking the time to discuss.
My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are; otherwise I don’t know how to identify the impartial reasons.
Impartial reasons would be reasons that would ‘count’ even if we were some sort of floating consciousness observing the universe without any specific personal interests.
I probably don’t have any more intuitive explanations of impartial reasons than that, so sorry if it doesn’t convey my meaning!
My math-intuition says “that’s still not well-defined, such reasons may not exist”.
To which you might say “Well, there’s some probability they exist, and if they do exist, they trump everything else, so we should act as though they exist.”
My intuition says “But the rule of letting things that could exist be the dominant consideration seems really bad! I could invent all sorts of categories of things that could exist, that would trump everything I’ve considered so far. They’d all have some small probability of existing, and I could direct my actions any which way in this manner!” (This is what I was getting at with the “meta-oughtness” rule I was talking about earlier.)
To which you might say “But moral reasons aren’t some hypothesis I pulled out of the sky, they are commonly discussed and have been around in human discourse for millennia. I agree that we shouldn’t just invent new categories and put stock into them, but moral reasons hardly seem like a new category.”
And my response would be “I think moral reasons of the type you are talking about mostly came from the human tendency to anthropomorphize, combined with the fact that we needed some way to get humans to coordinate. Humans weren’t likely to just listen to rules that some other human made up, so the rules had to come from some external source. And in order to get good coordination, the rules needed to be followed, and so they had to have the property that they trumped any prudential reasons. This led us to develop the concept of rules that come from some external source and trump everything else, giving us our concept of moral reasons today. Given that our concept of “moral reasons” probably arose from this sort of process, I don’t think that “moral reasons” is a particularly likely thing to actually exist, and it seems wrong to base your actions primarily on moral reasons. Also, as a corollary, even if there do exist reasons that trump all other reasons, I’m more likely to reject the intuition that they must come from some external source independent of humans, since I think that intuition was created by this non-truth-seeking process I just described.”