Am I right to infer that you’re arguing from a moral realist perspective?
If you’re not arguing from a moral realist perspective, wouldn’t {move the universe into a state I prefer} and {act morally} necessarily be the same because you could define your own moral values to match your preferences?
If morality is subjective, the whole distinction between morals and preferences breaks down.
Telofy: Trying to figure out the direction of the inferential gap here. Let me try to explain; I don’t promise to succeed.
Aggregative consequentialist utilitarianism holds that people in general should value most minds having the times of their lives, where “in general” here actually translates into a “should” operator. A moral operator. There’s a distinction between me wanting X and morality suggesting, requiring, or demanding X. Even if X is the same, different things can hold a relation to it.
At the moment I hold both a personal preference relation to your having a great time and a moral one. But if the moral one were dropped (as Williams makes me drop several of my moral reasons), I’d still have the personal one, and it supersedes the moral considerations that could arise otherwise.
Moral uncertainty: To confess, that was my bad not disentangling uncertainty about my preferences that happen to be moral, my preferences that happen to coincide with preferences that are moral, and the preferences that morality would, say, require me to have. That was bad philosophy on my part, and I can see Lewis, Chalmers, and Muehlhauser blushing at my failure.
I meant the uncertainty I have as an empirical subject in determining which of the reasons I find are moral reasons and, within those, which belong to which moral perspective. For instance, I assign high credence to breaking a promise being bad from a Kantian standpoint, times a low credence that Kant was right about what is right. So not breaking a promise has a few votes in my parliament, but not nearly as many as giving a speech about EA at UC Berkeley has, because I’m confident that a virtuous person would do that, and I’m somewhat confident it is good from a utilitarian standpoint too, so lots of votes.
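The parliament metaphor can be sketched as a simple credence-weighted tally. This is a toy model: the theories, credences, and verdict numbers below are invented for illustration, not positions attributed to anyone in this discussion.

```python
# Toy moral-parliament model: each theory gets votes proportional to our
# credence in it and casts them according to its verdict on an act.
# All numbers are made up for illustration.

def parliament_votes(act, theories):
    """Sum credence-weighted verdicts (each verdict is in [-1, 1]) for an act."""
    return sum(t["credence"] * t["verdict"](act) for t in theories)

theories = [
    {   # Kantian: breaking promises is clearly wrong...
        "credence": 0.1,  # ...but we give Kant low credence overall
        "verdict": lambda act: -1.0 if act == "break promise" else 0.2,
    },
    {   # Utilitarian: the EA talk does a lot of expected good
        "credence": 0.5,
        "verdict": lambda act: 0.8 if act == "give EA talk" else -0.3,
    },
    {   # Virtue ethics: a virtuous person would give the talk
        "credence": 0.4,
        "verdict": lambda act: 0.9 if act == "give EA talk" else -0.2,
    },
]

for act in ("break promise", "give EA talk"):
    print(act, round(parliament_votes(act, theories), 2))
# break promise -0.33
# give EA talk 0.78
```

The talk collects many more (weighted) votes than promise-breaking loses, matching the intuition above that an act endorsed by several theories you find credible outvotes one condemned by a single low-credence theory.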
I disagree that optimally satisfying your moral preferences equals doing what is moral. For one thing, you are not aware of all the moral preferences that, on reflection, you would agree with; for another, you could bias your dedication intensity in a way that, even though you are acting on moral preferences, the outcome is not what is moral all things considered. Furthermore, it is not obvious to me that a human is necessarily compelled to have all the moral preferences that are “given” to them. You can flat out reject three preferences, act on all the others, and, in virtue of your moral gap, you would not be doing what is moral, even though you are satisfying all the preferences in your moral preference class.
Nino: I’m not sure where I stand on moral realism (leaning weakly against it). The non-moral-realist part of me replies:
wouldn’t {move the universe into a state I prefer} and {act morally} necessarily be the same because you could define your own moral values to match your preferences?
Definitely not the same. First of all, to participate in the moral discussion, there is some element of intersubjectivity that kicks in, which outright excludes defining my moral values to equal my preferences a priori. They may do so a posteriori, but the part where they are moral values involves clashing them against something, be it someone else, a society, your future self, a state of pain, or, in the case of moral realism, the moral reality out there.
To argue that my moral values equal all my preferences would be equivalent to universal ethical preference egoism, the hilarious position which holds that the morally right thing to do is for everyone to satisfy my preferences, which would tile the universe with whiteboards, geniuses, ecstatic dance, cuddlepiles, orgasmium, freckles, and the feeling of water in your belly when bodysurfing a warm wave at 3 pm, among other things. I don’t see a problem with that, but I suppose you do, and that is why it is not moral.
If morality is intersubjective, there is discussion to be had. If it is fully subjective, you still need to determine in which way it is subjective, what a subject is, which operations, if any, transfer moral content between subjects, what legitimizes you telling me that my morality is subjective, and finally why call it morality at all if you are just talking about subjective preferences.
Why call it morality at all if you are just talking about subjective preferences?
Thanks for bridging the gap! Yeah, that is my current perspective, and I’ve found no meaningful distinction that would allow me to tell moral from amoral preferences. What you call intersubjective is something that I consider a strategic concern that follows from wanting to realize my moral preferences. I’ve wondered whether I should count the implications of these strategic concerns into my moral category, but that seemed less parsimonious to me. I’m wary of subjective things and want to keep them contained the same way I want to keep some ugly copy-pasted code contained: black-boxed in a separate module so it has no effects on the rest of the code base.
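The containment analogy can be made concrete with a small sketch of the kind of black-boxing meant here (the class name and preference numbers are invented for illustration): the messy, subjective internals live behind one narrow interface, so nothing else in the “code base” can come to depend on their details.

```python
# Sketch of the containment analogy: arbitrary, subjective internals are
# hidden behind a single narrow interface. The preference weights are
# illustrative placeholders.

class SubjectivePreferences:
    """Black box: the internals may be messy and idiosyncratic."""

    def __init__(self):
        # Messy internals, free to change without affecting callers.
        self._raw = {"cuddlepiles": 0.9, "whiteboards": 0.3}

    def rank(self, options):
        """The only method the rest of the system is allowed to call."""
        return sorted(options, key=lambda o: self._raw.get(o, 0.0), reverse=True)

prefs = SubjectivePreferences()
prefs.rank(["whiteboards", "cuddlepiles"])  # -> ["cuddlepiles", "whiteboards"]
```

The point of the design is that the rest of the program only ever sees rankings, never the raw weights, so the subjective part stays swappable and contained.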
There’s a distinction between me wanting X, and morality suggesting, requiring, or demanding X.
I like to use two different words here to make the distinction clearer: moral preferences and moral goals. In both cases you can talk about instrumental and terminal moral preferences/goals. This is how I prefer to distinguish goals from preferences (copy-pasted from my thesis):
To aid comprehension, however, I will make an artificial distinction of moral preferences and moral goals that becomes meaningful in the case of agent-relative preferences: two people with a personal profit motive share the same preference for profit but their goals are different ones since they are different agents. If they also share the agent-neutral preference for minimizing global suffering, then they also share the same goal of reducing it.
I’ll assume that in this case we’re talking about agent-neutral preferences, so I’ll just use goal here for clarity. If someone has the personal goal of getting good at playing the theremin, then on Tuesday morning, when they’re still groggy from a night of coding and all out of coffee and Modafinil, they’ll want to stay in bed and very much not want to practice the theremin on one level but still want to practice the theremin on another level, a system 2 level, because they know that to become good at it, they’ll need to practice regularly. Here having practiced is an instrumental goal to the (perhaps) terminal goal of becoming good at playing the theremin. You could say that their terminal goal requires or demands them to practice even though they don’t want to. I felt the same way when I had to file and send out donation certificates to donors.
I can see Lewis, Chalmers, and Muehlhauser blushing at my failure.
Aw, hugs!
For one thing, you are not aware of all the moral preferences that, on reflection, you would agree with.
Oops, yes. I should’ve specified that.
For another, you could bias your dedication intensity.
If I understand you correctly, then that is what I tried to capture by “optimally.”
You can flat out reject three preferences, act on all the others, and, in virtue of your moral gap, you would not be doing what is moral, even though you are satisfying all the preferences in your moral preference class.
This seems to me like a combination of the two limitations above. A person can decide, for strategic purposes, not to act on moral preferences that they continue to entertain, e.g., to cooperate more effectively with others on realizing another moral goal. When a person rejects, i.e., no longer entertains, a moral preference (assuming such a thing can be willed) and optimally furthers their other moral goals, then I’d say they are doing what is moral (to them).
To argue that my moral values equal all my preferences would be equivalent to universal ethical preference egoism, the hilarious position which holds that the morally right thing to do is for everyone to satisfy my preferences.
Cuddlepiles? Count me in! But these preferences also include “the most minds having the time of their lives.” I would put all these preferences on the same qualitative footing, but let’s say you care comparatively little about the whiteboards and a lot about the happy minds and the ecstatic dance. Let’s further assume that a lot of people out there are fairly neutral about the dance (at least so long as they don’t have to dance) but excited about the happy minds. When you decide to put the realization of the dance goal on the back burner and concentrate on maximizing those happy minds, you’ll have an easy time finding a lot of cooperation partners, and together you actually have a bit of a shot at nudging the world in that direction. If you concentrated on the dance goal, however, you’d find far fewer partners and make much less progress, incurring a large opportunity cost in goal realization. Hence pursuing this goal would be less moral by (lacking) dint of its intersubjective tractability.
So yes, to recap, according to my understanding, everyone has, from your perspective, the moral obligation to satisfy your various goals. However, other people disagree, particularly on agent-relative goals but also at times on agent-neutral ones. Just as you require resources to realize your goals, you often also require cooperation from others, and costs and lacking tractability make some goals more and others less costly to attain. Hence, the moral thing to do is to minimize one’s opportunity cost in goal realization.
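This recap can be caricatured as a toy expected-value model (all figures are invented for illustration): progress on a goal scales with how many cooperation partners it can recruit, so a broadly shared goal beats an idiosyncratic one even if you care about it slightly less, and the difference between the two is the opportunity cost.

```python
# Toy model of intersubjective tractability: expected progress on a goal
# grows with the number of willing cooperation partners. All figures
# below are illustrative assumptions, not empirical claims.

def expected_progress(care, partners, progress_per_person=1.0):
    """Progress you'd make on a goal: your own effort plus recruited
    partners, scaled by how much you care about the goal."""
    return care * (1 + partners) * progress_per_person

# Broadly shared goal: many people are excited about the happy minds.
happy_minds = expected_progress(care=0.8, partners=1000)
# Idiosyncratic goal: few people share the ecstatic-dance enthusiasm.
ecstatic_dance = expected_progress(care=0.9, partners=3)

# Opportunity cost of focusing on the dance goal instead of happy minds:
opportunity_cost = happy_minds - ecstatic_dance
```

Even though the model assigns a slightly higher `care` weight to the dance goal, the partner count dominates, which is the “intersubjective tractability” point in quantitative dress.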
Please tell me if I’m going wrong somewhere. Thanks!
I really appreciate your point about intersubjective tractability. It enters the question of how much we should let empirical and practical considerations spill into our moral preferences (“ought implies can,” for example; does it also imply “in a not extremely hard to coordinate way”?).
At large, I’d say that you are talking about how to be an agenty moral agent. I’m not sure morality requires being agenty, but it certainly benefits from it.
Bias dedication intensity: I meant something orthogonal to optimality. Dedicating oneself only to moral preferences, but more to some that actually don’t have that great a standing, and less to others which normally do the heavy lifting (don’t you love it when philosophers talk about this “heavy lifting”?). So doing it non-optimally.