Thanks Ben! I’ll try and comment on your object level response in this comment and your meta level response in another.
Alas, I’m not sure I properly track the full extent of your argument, so I’ll focus on the parts that are clear to me. Apologies if I’m failing to understand the force of your argument because I’m missing a crucial part.
I see the crux of our disagreement summed up here:
My model of the person who believes the OP wants to say
“Yes, but just because you can tell a story about how evolution would give you these values, how do you know that they’re actually good?”
To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem—I have no further confusion.
I don’t see how ‘understanding that evolution created these cares and desires in me resolves the problem.’
Desires on their own are at most relevant as *prudential* reasons for action, e.g. I want chocolate, so I have a [prudential] reason to get chocolate. I attempt to deal (admittedly briefly) with prudential reasons in the appendix. Note that I don’t think these sorts of prudential reasons (if they exist) amount to moral reasons.
Unless a mere desire finds itself in a world where some broader moral theory is at play (e.g. preference utilitarianism, which would itself need an appropriate meta-ethical grounding/truthmaker, perhaps Parfit’s Non-Metaphysical Non-Naturalist Normative Cognitivism), the mere desire won’t create moral reasons for action. And if you do offer such a moral theory, then this just runs into the argument of my post: how would the human have access to the relevant moral theory?
In short, if you’re just saying ‘what we talk about as moral reasons for action actually just boil down to prudential reasons for action, since they are just desires I have’, then you’ll need to decide whether it’s plausible that a mere desire can actually create an objectively binding prudential reason for action.
If instead you’re saying ‘moral reasons are just what I plainly and simply comprehend; they are primitive, so they can have no further explanation’, then I have a simple question: why think they are primitive, when it seems we can ask the seemingly legitimate question you preempt, ‘but why is X actually good?’
However, I imagine that neither of my two summaries of your argument really are what you are driving for, so apologies if that’s the case.
*nods* I think what I wrote there wasn’t very clear.
To restate my general point: I’m suggesting that your general frame contains a weird inversion. You’re supposing that there is an objective morality, and then wondering how we can find out about it and whether our moral intuitions are right. I first notice that I have very strong feelings about my and others’ behaviour, and then attempt to abstract that into a decision procedure, and then learn which of my conflicting intuitions to trust.
In the first one, you would be surprised to find out we’ve randomly been selected to have the right morality by evolution. In the second, it’s almost definitional that evolution has produced us to have the right morality. There’s still a lot of work to do to turn the messy desires of a human into a consistent utility function (or something like that), which is a thing I spend a lot of time thinking about.
Does the former seem like an accurate description of the way you’re proposing to think about morality?
Yep, what you suggest isn’t far from the mark. Though note I’m open to the possibility of normative realism being false: it could be that we are all fooled and that there are no true moral facts.
I just think this question of ‘what grounds this moral experience?’ is the right one to ask. As you’ve articulated it, I just think your mere feelings about behaviours don’t amount to normative reasons for action, unless you can explain how these normative properties enter the picture.
Note that normative reasons are weird: they are not like anything descriptive. They have this strange property of what I sometimes call ‘binding oughtness’, in that they rationally compel the agent to do particular things. It’s not obvious to me why your mere desires would throw up this special and weird property of binding oughtness.