Instead of “utilitarianism as the One True Theory,” we consider it as “utilitarianism as a personal, morally-inspired life goal...”
While this concession is undoubtedly frustrating, proclaiming others to be objectively wrong rarely accomplished anything anyway. It’s not as though moral disagreements—or disagreements in people’s life choices—would go away if we adopted moral realism.
If your goal here is to convince those inclined towards moral realism to see anti-realism as existentially satisfying, I would recommend a different framing of it. I think that framing morality as a ‘personal life goal’ makes it seem as though it is much more a matter of choice or debate than it in fact is, and will probably ring alarm bells in the mind of a realist and make them think of moral relativism.
Speaking as someone inclined towards moral realism, the most inspiring presentations I’ve ever seen of anti-realism are those given by Peter Singer in The Expanding Circle and Eliezer Yudkowsky in his metaethics sequence. Probably not by coincidence—both of these people are inclined to be realists. Eliezer said as much, and Singer later became a realist after reading Parfit. Eliezer Yudkowsky on ‘The Meaning of Right’:
The apparent objectivity of morality has just been explained—and not explained away. For indeed, if someone slipped me a pill that made me want to kill people, nonetheless, it would not be right to kill people. Perhaps I would actually kill people, in that situation—but that is because something other than morality would be controlling my actions.
Morality is not just subjunctively objective, but subjectively objective. I experience it as something I cannot change. Even after I know that it’s myself who computes this 1-place function, and not a rock somewhere—even after I know that I will not find any star or mountain that computes this function, that only upon me is it written—even so, I find that I wish to save lives, and that even if I could change this by an act of will, I would not choose to do so. I do not wish to reject joy, or beauty, or freedom. What else would I do instead? I do not wish to reject the Gift that natural selection accidentally barfed into me.
And Singer in The Expanding Circle:
“Whether particular people with the capacity to take an objective point of view actually do take this objective viewpoint into account when they act will depend on the strength of their desire to avoid inconsistency between the way they reason publicly and the way they act.”
These are both anti-realist claims. They define ‘right’ descriptively and procedurally as arising from what we would want to do under some ideal circumstances, and they rigidify on the output of that idealization, not on what we want. To a realist, this is far more appealing than a mere “personal, morally-inspired life goal”, and it has the character of ‘external moral constraint’, even if it’s not ultimately external but rather the result of immovable or basic facts about how your mind will, in fact, work, including facts about how your mind finds inconsistencies in its own beliefs. This is a feature, not a bug:
According to utilitarianism, what people ought to spend their time on depends not just on what they care about but also on how they can use their abilities to do the most good. What people most want to do only factors into the equation in the form of motivational constraints, constraints about which self-concepts or ambitious career paths would be long-term sustainable. Williams argues that this utilitarian thought process alienates people from their actions since it makes it no longer the case that actions flow from the projects and attitudes with which these people most strongly identify...
The exact thing that Williams calls ‘alienating’ is the thing that Singer, Yudkowsky, Parfit and many other realists and anti-realists consider to be the most valuable thing about morality! But you can keep this ‘alienation’ if you reframe morality as being the result of the basic, deterministic operations of your moral reasoning, the same way you’d reframe epistemic or practical reasoning on the anti-realist view. Then it seems more ‘external’ and less relativistic.
One thing this framing makes clearer, which you don’t deny but don’t mention, is that anti-realism does not imply relativism.
In that case, normative discussions can remain fruitful. Unfortunately, this won’t work in all instances. There will be cases where no matter how outrageous we find someone’s choices, we cannot say that they are committing an error of reasoning.
What we can say, on anti-realism as characterised by Singer and Yudkowsky, is that they are making an error of morality. On anti-realism, we are not obligated (how could we be?) to adopt relativism or permissiveness, or to accept values incompatible with our own. Ultimately, you can just say ‘I am right and you are wrong’.
That’s one of the major upsides of anti-realism to the realist: you still get to make universal, prescriptive claims and follow them through, and you follow them through because they are morally right; if people disagree with you, then they are morally wrong, and you aren’t obligated to listen to their arguments if they arise from fundamentally incompatible values. Put that way, anti-realism is much more appealing to someone with realist inclinations.
The exact thing that Williams calls ‘alienating’ is the thing that Singer, Yudkowsky, Parfit and many other realists and anti-realists consider to be the most valuable thing about morality! But you can keep this ‘alienation’ if you reframe morality as being the result of the basic, deterministic operations of your moral reasoning, the same way you’d reframe epistemic or practical reasoning on the anti-realist view. Then it seems more ‘external’ and less relativistic.
Nice point!
If your goal here is to convince those inclined towards moral realism to see anti-realism as existentially satisfying, I would recommend a different framing of it. I think that framing morality as a ‘personal life goal’ makes it seem as though it is much more a matter of choice or debate than it in fact is, and will probably ring alarm bells in the mind of a realist and make them think of moral relativism.
Yeah, I think that’s a good suggestion. I had a point about “arguments can’t be unseen” – which seems somewhat related to the alienation point.
I didn’t quite want to imply that morality is just a life goal. There’s a sense in which morality is “out there” – it’s just more underdetermined than the realists think, and maybe more goes into whether or not one feels compelled to dedicate all of one’s life to other-regarding concerns.
I emphasize this notion of “life goals” because it will play a central role later on in this sequence. I think it’s central to all of normativity. Back when I was a moral realist, I used to say “ethics is about goals” and “everything is ethics.” There’s this position “normative monism” that says all of normativity is the same thing. I kind of feel this way, except that I think the target criteria can differ between people, and are often underdetermined. (As you point out in some comment, things also depend on which parts of one’s psychology one identifies with.)
I kind of feel this way, except that I think the target criteria can differ between people, and are often underdetermined. (As you point out in some comment, things also depend on which parts of one’s psychology one identifies with.)
I think that you were referring to this?
Normative realism implies identification with system 2
...
I find this very interesting because locating personal identity in system 1 feels conceptually impossible or deeply confusing. No matter how much rationalization goes on, it never seems intuitive to identify myself with system 1. How can you identify with the part of yourself that isn’t doing the explicit thinking, including the decision about which part of yourself to identify with? It reminds me of Nagel’s The Last Word.
My point here was that if you are a realist about normativity of any kind, you have to identify with system 2, as that is what makes the (potentially correct) judgements about what you ought to do.
But that’s not to say that if you are an anti-realist, you have to identify with system 1. If you are an anti-realist, then in some sense (the realist sense) you don’t have to identify with anything. How easy and natural it is to identify with system 2 depends on how much importance you place on coherence among your values, which in turn depends on how coherent and universalizable your values actually are—you can be an anti-realist but accept that some fairly strong degree of convergence does occur in practice, for whatever reason. This:
target criteria can differ between people, and are often underdetermined
seems to imply that you don’t think there will be much convergence in practice, or that we should feel strong pressure to reach high-level agreement on moral questions, since such a project is never going to succeed.
I think this is part of the motivation for your ‘case for suffering-focused ethics’: even though any asymmetry between preventing suffering and producing happiness falls victim to the absurd conclusion and the paralysis argument, I’m assuming that this wouldn’t bother you much.
In that post, I talk about why I think this is an unstable position, regardless of whether realism is true.
AFAIK the paralysis argument is about the implications of non-consequentialism, not about downside-focused axiologies. In particular, it’s about the implications of a pair of views. As Will says in the transcript you linked:
“but this is a paradigm nonconsequentialist view endorses an acts/omissions distinction such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of benefit… And if you have those two claims, then you’ve got to conclude [along the lines of the paralysis argument]”.
Also, I’m not sure how Lukas would reply, but I think one way of defending the claim of his which you criticize, namely that “the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled”, is by appealing to the existence of impossibility theorems in ethics. In that case we truly won’t be able to avoid counterintuitive results (see e.g. Arrhenius 2000, Greaves 2017). This also shouldn’t surprise us too much if we accept the evolved nature of some of our moral intuitions.