*nods* I think what I wrote there wasn’t very clear.
To restate my general point: I’m suggesting that your general frame contains a weird inversion. You’re supposing that there is an objective morality, and then wondering how we can find out about it and whether our moral intuitions are right. I first notice that I have very strong feelings about my and others’ behaviour, and then attempt to abstract that into a decision procedure, and then learn which of my conflicting intuitions to trust.
On the first frame, it would be surprising to find that evolution happened to select us for the right morality. On the second, it's almost definitional that evolution has produced us with the right morality. There's still a lot of work to do to turn the messy desires of a human into a consistent utility function (or something like that), which is a thing I spend a lot of time thinking about.
Does the former seem like an accurate description of the way you’re proposing to think about morality?
Yep, I think what you suggest isn't far from the truth. Though note I'm open to the possibility of normative realism being false: it could be that we are all fooled and that there are no true moral facts.
I just think this question of 'what grounds this moral experience' is the right one to ask. As you've articulated it, I just think your mere feelings about behaviours don't amount to normative reasons for action, unless you can explain how these normative properties enter the picture.
Note that normative reasons are weird: they are not like anything descriptive. They have this strange property of what I sometimes call 'binding oughtness', in that they rationally compel the agent to do particular things. It's not obvious to me why your mere desires would throw up this special and weird property of binding oughtness.