Why call it morality at all if you are just talking about subjective preferences?
Yeah, that is my current perspective, and I’ve found no meaningful criterion that would let me distinguish moral from amoral preferences. What you call intersubjective is something that I consider a strategic concern that follows from wanting to realize my moral preferences. I’ve wondered whether I should count the implications of these strategic concerns as part of my moral category, but that seemed less parsimonious to me. I’m wary of subjective things and want to keep them contained the same way I want to keep some ugly copypasted code contained: black-boxed in a separate module so it has no effects on the rest of the code base.
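To make the containment analogy concrete, here is a minimal sketch (in Python; the module and function names are my own invention, purely illustrative): the messy part lives behind one narrow interface, and nothing else in the code base depends on its internals.

```python
# black_box.py -- hypothetical module, purely illustrative.
# The ugly, copypasted internals stay private; the rest of the code base
# only ever touches the one small public function below.

def _messy_copypasted_ranking(options):
    # Imagine duplicated, hard-to-read logic here.
    return sorted(options)

def pick(options):
    """The single entry point the rest of the code base is allowed to call."""
    return _messy_copypasted_ranking(options)[0]
```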
There’s a distinction between me wanting X, and morality suggesting, requiring, or demanding X.
I like to use two different words here to make the distinction clearer: moral preferences and moral goals. In both cases you can talk about instrumental and terminal moral preferences/goals. This is how I prefer to distinguish goals from preferences (copypaste from my thesis):
To aid comprehension, however, I will make an artificial distinction between moral preferences and moral goals that becomes meaningful in the case of agent-relative preferences: two people with a personal profit motive share the same preference for profit, but their goals are different ones since they are different agents. If they also share the agent-neutral preference for minimizing global suffering, then they also share the same goal of reducing it.
I’ll assume that in this case we’re talking about agent-neutral preferences, so I’ll just use goal here for clarity. Suppose someone has the personal goal of getting good at playing the theremin. On Tuesday morning, when they’re still groggy from a night of coding and all out of coffee and Modafinil, they’ll want to stay in bed and very much not want to practice the theremin on one level, but on another level, a system 2 level, they’ll still want to practice, because they know that to become good at it, they’ll need to practice regularly. Here having practiced is an instrumental goal to the (perhaps) terminal goal of becoming good at playing the theremin. You could say that their terminal goal requires or demands that they practice even though they don’t want to. When I had to file and send out donation certificates to donors, I felt the same way.
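To make the agent-relative/agent-neutral distinction from the thesis excerpt above concrete, here is a toy sketch (Python, my own illustration, not from the thesis): an agent-relative preference instantiates to a different goal for each agent, while an agent-neutral preference yields one shared goal.

```python
# Toy illustration (my own, not from the thesis): an agent-relative
# preference yields a different goal per agent; an agent-neutral one
# yields a single shared goal.
from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:
    description: str

def profit_goal(agent: str) -> Goal:
    # Agent-relative preference: "I want profit for myself."
    return Goal(f"maximize {agent}'s profit")

REDUCE_SUFFERING = Goal("minimize global suffering")

def suffering_goal(agent: str) -> Goal:
    # Agent-neutral preference: the goal does not depend on who holds it.
    return REDUCE_SUFFERING

assert profit_goal("Alice") != profit_goal("Bob")        # different goals
assert suffering_goal("Alice") == suffering_goal("Bob")  # the same goal
```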
I can see Lewis, Chalmers, and Muehlhauser blushing at my failure.
Aw, hugs!
For one thing, you are not aware of all moral preferences that, on reflection, you would agree with.
Oops, yes. I should’ve specified that.
For another, you could bias your dedication intensity.
If I understand you correctly, then that is what I tried to capture by “optimally.”
You can flat-out reject 3 preferences and act on all the others, and in virtue of your moral gap, you would not be doing what is moral, even though you are satisfying all preferences in your moral preference class.
This seems to me like a combination of the two limitations above. A person can decide, for strategic purposes, not to act on moral preferences that they continue to entertain, e.g., in order to cooperate more effectively with others on realizing another moral goal. When a person rejects, i.e., no longer entertains, a moral preference (assuming such a thing can be willed) and optimally furthers their other moral goals, then I’d say they are doing what is moral (to them).
To argue that my moral values equate all my preferences would be equivalent to universal ethical preference egoism, the hilarious position which holds that the morally right thing to do is for everyone to satisfy my preferences.
Cuddlepiles? Count me in! But these preferences also include “the most minds having the time of their lives.” I would put all these preferences on the same qualitative footing, but let’s say you care comparatively little about the whiteboards and a lot about the happy minds and the ecstatic dance. Let’s further assume that a lot of people out there are fairly neutral about the dance (at least so long as they don’t have to dance) but excited about the happy minds. When you decide to put the dance goal on the back burner and concentrate on maximizing those happy minds, you’ll have an easy time finding a lot of cooperation partners, and together you actually have a bit of a shot at nudging the world in that direction. If you concentrated on the dance goal, however, you’d find far fewer partners and make much less progress, incurring a large opportunity cost in goal realization. Hence pursuing this goal would be less moral by dint of its lacking intersubjective tractability.
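The opportunity-cost point can be put as a toy calculation (a Python sketch with made-up numbers, just to illustrate the shape of the argument): progress on a goal scales with how many willing cooperation partners you can recruit, so concentrating on the less widely shared goal forgoes most of the progress you could otherwise have made.

```python
# Made-up numbers, purely illustrative: progress on a goal scales with the
# number of willing cooperation partners you can recruit for it.
def expected_progress(own_effort: float, partners: int, effort_per_partner: float = 1.0) -> float:
    return own_effort + partners * effort_per_partner

happy_minds = expected_progress(own_effort=1.0, partners=50)    # widely shared goal
ecstatic_dance = expected_progress(own_effort=1.0, partners=2)  # few people care

# Concentrating on the less tractable goal forgoes most of the progress
# the widely shared goal would have made possible.
opportunity_cost = happy_minds - ecstatic_dance
print(happy_minds, ecstatic_dance, opportunity_cost)  # 51.0 3.0 48.0
```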
So yes, to recap, according to my understanding, everyone has, from your perspective, the moral obligation to satisfy your various goals. However, other people disagree, particularly on agent-relative goals but also at times on agent-neutral ones. Just as you require resources to realize your goals, you often also require cooperation from others, and differences in cost and tractability make some goals easier and others harder to attain. Hence the moral thing to do is to minimize one’s opportunity cost in goal realization.
Please tell me if I’m going wrong somewhere. Thanks!
I really appreciate your point about intersubjective tractability. It enters into the question of how much we should let empirical and practical considerations spill into our moral preferences (“ought implies can,” for example; does it also imply “can in a not extremely hard to coordinate way”?).
By and large, I’d say that you are talking about how to be an agenty moral agent. I’m not sure morality requires being agenty, but it certainly benefits from it.
Bias dedication intensity: I meant something orthogonal to optimality. Dedicating yourself only to moral preferences, but more to some that actually don’t have that great a standing, and less to others which normally do the heavy lifting (don’t you love it when philosophers talk about this “heavy lifting”?). So doing it non-optimally.
Thanks for bridging the gap!