One thing I think you should consider is the idea that one’s preferences become “tuned” to one’s moral beliefs. I would challenge the sentence in which you claim that “even if [virtue ethics/kant] were winning, I would still go there and pull that lever”. Wouldn’t the idea that virtue ethics is winning be contradicted by your choosing to pull the lever? How do we know when we are fully convinced by an ethical theory? We measure our conviction to follow it. If you are fully convinced of utilitarianism, for example, your preferences will reflect that, for how could you possibly prefer not to follow an ethical theory which you completely believe in? It is not coherent to say something like “I know for certain that this is right, but I prefer not to do it”. What is really happening in a situation like this is that you are giving some ethical priority to your own preferences; hence you are partially an ethical egoist. To map this onto your situation, I would interpret your writing above as meaning that you are not fully convinced of the ethical theories you listed: you find that reason guides you to utilitarianism, Kantianism, or whatever it may be, but you are overestimating your own certainty. You say that you take EA actions in spite of what is morally right to do. If you were truly convinced that something else were morally right, you would do it. Why wouldn’t you?
If I observe you do something which qualifies as an EA action, and then ask you why you did it, you might say something like “Because it is my preference to do it, even though I know that X is morally right”, X being some alternative action. What I’m trying to say (apologies, because this idea is difficult to communicate clearly) is that when you say “Because it is my preference”, you are offering your preference as valid justification for your actions. This form of justification is a principle of ethical egoism, so some non-zero percentage of your ethical commitments must be toward yourself. Even though you claimed to be certain that X is right, I have reason to challenge your certainty, because of the justification you gave for the action. This is, in some sense, a semantics issue, turning on what we consider to qualify as “belief” in an ethical system.
From your comment, it seems that you feel the moral obligation strongly. Like the Oxford student cited by Krishna, you don’t want to do what you want to do; you want to do what you ought to do.
I don’t experience that feeling, so let me reply to your questions:
Wouldn’t virtue ethics winning be contradicted by your pulling the lever?
Not really. Pulling the lever is what I would do; it is what I would think I have reason to do, but it is not what I would think I have moral reason to do. I would reason that a virtuous person (ex hypothesi) wouldn’t kill someone, and that the moral thing to do is to let the lever be. Then I would act on my preference, which is stronger than my preference that the moral thing be done. A contradiction would arise only if you hold that all reasons for action are moral reasons, or that moral reasons have the final say in every choice of action. I don’t.
In the same spirit, you suggest I’m an ethical egoist. This is because when you simulated me in this lever conflict, you assumed “morality comes first”, so you dropped the altruism requirement to make my beliefs compatible with my action. When I reason, however, I think “morality is one of the things I should consider here”, and it doesn’t win over my preference for most minds having an exulting time. So I go with my preference even when it goes against morality.
This is orthogonal to Ethical Egoism, a position that I consider both despicable and naïve, to be frank. (Naïve because I know the subagents with whom I share personal identity care, for themselves, about more than just happiness or preference satisfaction. Despicable because it is one thing to be a selfish prick, which is understandable in an unfair universe into which we are thrown for a finite life with no given meaning or sensible narrative; it is another thing to advocate a moral position in which you want everyone to be a selfish prick, and to believe that being a selfish prick is the right thing to do. That I find preposterous at a non-philosophical level.)
If you were truly convinced that something else were morally right, you would do it. Why wouldn’t you?
Because I don’t always do what I should do. In fact, I nearly never do what is morally best. I try hard not to stray too far from the target, but I flinch from staring into the void almost as much as the average EA Joe. I really prefer knowing what the moral thing to do is in a situation; it is very informative and helpful for assessing what I will in fact do, but it is not compelling above and beyond the other contextual considerations at hand. A practical necessity, a failure of reasoning, a little momentary selfishness, and an appreciation for aesthetic values have all been known to cause me to act for non-moral reasons at times. And of course, I have often done what I should do too; I have often acted the moral way.
To reaffirm, we disagree on what Ethical Egoism means. I take it to be the position that individuals in general ought to be egoists (say, some of the time). You seem to be taking it to admit of degrees, and furthermore to hold that if I use any egoistic reason to justify my action, then merely in virtue of my using it as justification I mean that everyone should do (or be permitted to do) the same. That makes sense if your conception of just-ice is contractualist and you assume just-ification has a strong connection to just-ice. From me to me, I take it to be a justification (between my selves, perhaps); but from me to you, you could take it as an explanation of my behavior, which avoids the implication you assign to the concept of justification, namely that it demands a commitment to ethical egoism.
I’m not sure what my ethical (meta-ethical) position is, but I am pretty certain it isn’t, even in part, ethical egoism.