You suggest antirealism has undesirable properties, then say:
> But moral antirealism is ultimately a doctrine of conflict—if reason has no place in motivational discussion, then all that’s left for me to get my way from you is threats, emotional manipulation, misinformation and, if need be, actual violence. Any antirealist who denies this as the implication of their position is kidding themselves (or deliberately supplying misinformation).
I am a moral antirealist. I don’t think I endorse a position that is a “doctrine of conflict.” However, it’s hard to assess why you think this. You suggest that antirealism entails that “reason has no place in motivational discussion.” I’m not quite sure what you mean by this, but I don’t think reason has no place. Why would I think that? Perhaps you are thinking of “reason” differently than I am? If you could elaborate on what you are claiming here, and how you are thinking of some of these notions, that would be helpful.
As an antirealist, I don’t rely on threats or emotional manipulation or misinformation any more than anyone else, and I don’t know why I would if I were an antirealist. I don’t think antirealism has anything to do with having to rely any more on any of these than moral realism does. Why would it?
To be clear, I’m talking about when an antirealist wants a behaviour change from another person (that, by definition, that person isn’t currently inclined to do). Say you wanted to persuade me to vote for a particular political candidate. If you were a moral realist, you’d have these classes of option:
1. Present me with real data that shows that on my current views, it would benefit me to vote for them
2. Present me with false or cherrypicked data that shows that on my current views, it would benefit me to vote for them
3. Threaten me if I don’t vote for them
4. Emotionally cajole me into voting for them, e.g. by telling me they saved my cat, that their opponent is a lecher, etc—in some way highlighting some trait that will irrationally dispose me towards them
5. Feign belief in a moral view that I hold and show me that their policies are more aligned with it
6. Show me that their policies are more aligned with a moral view that we both in fact share
7. Persuade me to accept whatever (you think) is the ‘correct’ moral view, and show me that their policies are aligned with it
Perhaps there are others, and perhaps 2-5 are basically the same thing, but whatever. As a moral antirealist you don’t have access to the last two. And without those, the only honest/nonviolent option you have to persuade me is not going to be available to you the majority of the time, since usually I’m going to be better informed than you about what things are in fact good for me.
This isn’t to say that moral antirealists necessarily will manipulate/threaten etc—I know many antirealists who seem like ‘good’ people who would find manipulating other people for personal gain grossly unpleasant. But nonetheless, taking away the last two options without replacing them with something equally honest necessarily incentivises the remaining set, most of which, and the most accessible of which, are dishonest.
This isn’t supposed to be a substantial argument for moral realism, but I think it’s an argument against antirealism. As an antirealist it would nonetheless be far better for you to live in a world where the 6th and 7th options were possible. So if you reject moral realism, you prudentially should nonetheless favour finding a third position, one that permits similarly nonmanipulative approaches.
(Though, sidebar: while it’s easy to dismiss the desirability of this property as a distraction from the ‘truth’ of the debate, I think this is too simplistic. At the level of abstraction at which moral philosophy happens, ‘truth’ is also a somewhat murky notion, and one we don’t have access to. We can say we have beliefs, but even those are a form of action, and hence motivated. So it’s unclear to me what lies at the bottom of this pyramid, but I don’t think the view that morality/motivation is a form of knowledge and thus undergirded by epistemology makes any sense)
I’m not sure how either of the last two are harder to explain on an anti-realist view than a moral realist view. I don’t think anti-realists would accept that they aren’t possible on their view.
> Show me that their policies are more aligned with a moral view that we both in fact share
This makes sense on a lot of anti-realist views. Anti-realists don’t think that people don’t have dispositions that are well described as moral. It’s possible to share dispositions, and in fact we all empirically do share a lot of moral dispositions.
> Persuade me to accept whatever (you think) is the ‘correct’ moral view, and show me that their policies are aligned with it
Also seems fine on an anti-realist view. I don’t see how persuading is easier for a moral realist; surely you would still need to appeal to something that your interlocutor already believes/values.
I would characterise antirealism as something like ‘believing that there is no normative space, and hence no logical or empirical line of reasoning you could give to change someone’s motivations.’
> I don’t think anti-realists would accept that they aren’t possible on their view.
I would be interested to hear a counterexample that isn’t a language game. I don’t see how one can sincerely advocate someone else hold a position they think is logically indefensible.
> Anti-realists don’t think that people don’t have dispositions that are well described as moral. It’s possible to share dispositions, and in fact we all empirically do share a lot of moral dispositions.
I think this is a language game. A ‘disposition’ is not the same phenomenon that someone who believes their morality has some logical/empirical basis thinks their morality is. A disposition isn’t functionally distinct from a preference—something we can arguably share, but per Hume, something which has nothing to do with reason.
Someone who believed in a moral realist view that valued a state whose realisation they would never experience—black ties at their own funeral, for instance—should be highly sceptical of a moral antirealist who claimed to value the same state even though they also wouldn’t experience it. The realist believes the word ‘value’ in that sentence means something motivationally relevant to a moral realist. To an antirealist it can only mean something like ‘pleasing to imagine’. But if they won’t be at the funeral, they won’t know whether the state was realised, and so they can get their pleasure just imagining it happen—it doesn’t otherwise matter to them whether it does.
Not by coincidence I think, this arguably gives the antirealist access to a basically hedonistic quasi-morality in practice (though no recourse to defend it), but not to any common alternative.
> I don’t see how persuading is easier for a moral realist; surely you would still need to appeal to something that your interlocutor already believes/values.
If you start with the common belief that there is some such ‘objective’ morality and some set of reasoning steps or tools that would let us access it, you can potentially correct the other’s use of those tools in good faith. If one of you doesn’t actually believe that process is even possible, it would be disingenuous to suppose there’s something even to correct.
***
FWIW, we’re spilling a lot of ink over by far the least interesting part of my initial comment. I would expect it to be more productive to talk about e.g.:
- The analogy of (the irrelevance of) moral realism to (the irrelevance of) mathematical realism/Platonism
- The unique evolutionary inexplicability of utilitarianism (h.t. Joshua Greene’s argument in Moral Tribes) and how antirealists can explain this
- The convergence of moral philosophers towards three heavily overlapping moral philosophies—given the infinite possible moral philosophies and how antirealists can explain this
- My suggestion that this strongly suggests a process by which some or most moral philosophies can be excluded: does this seem false? Or true, but insufficiently powered to narrow the picture down?
- My suggestion that iterative self-modification of one’s motivations might converge: whether people disagree with this suggestion or agree but think the phenomenon is explicable in e.g. strictly physical terms or otherwise uninteresting
- My suggestion that if we accept that motivation has its own set of axiom-like properties, we might be able to ‘derive’ quasi-moral views in the same way we can derive properties about applied maths or physics (i.e. not that they’re necessarily ‘true’, whatever that means, but that we will necessarily behave in ways that in some sense assume them to be so)
> I would characterise antirealism as something like ‘believing that there is no normative space, and hence no logical or empirical line of reasoning you could give to change someone’s motivations.’
I would not accept this characterization. Antirealism is the view that there are no stance-independent moral facts. I don’t think it logically entails any particular normative implications at all, so I do not think it has no “normative space” or “no logical or empirical line of reasoning” you could give to change someone’s motivations.
> I would be interested to hear a counterexample that isn’t a language game. I don’t see how one can sincerely advocate someone else hold a position they think is logically indefensible.
I’m not sure what you have in mind by a language game, but you gave this as an example of something an antirealist has no access to: “Show me that their policies are more aligned with a moral view that we both in fact share.”
Why wouldn’t an antirealist have access to this? There are a few obvious counterexamples. Here’s one: cultural relativism. If two people are cultural relativists and are members of the same culture, one of them could readily convince the other that a policy is more in line with the moral standards of the culture than some other policy. The same generalizes to other antirealist positions, such as various constructivist views.
> Someone who believed in a moral realist view that valued a state whose realisation they would never experience—black ties at their own funeral, for instance—should be highly sceptical of a moral antirealist who claimed to value the same state even though they also wouldn’t experience it.
Why should they be skeptical? I am a moral antirealist and I have all kinds of preferences that are totally unrelated to my own experiences. I want my daughter to go on to live a happy life long after I am dead, and I would actively sacrifice my own welfare to ensure this would be the case even if I’d never experience it. I don’t believe I do this because I’d feel happy knowing the sacrifice was made; I’d do it because I value more than just my own experiences.
I see no legitimate reason for realists to be skeptical of antirealists who have values like this. There is nothing special about valuing outcomes I’d experience that would prioritize them over ones I won’t.
> To an antirealist it can only mean something like ‘pleasing to imagine’.
That isn’t what that means to me, so I do not think this is correct. If you think this is some kind of logical entailment of moral antirealism, I’d be interested in seeing an attempt at showing a contradiction were an antirealist to think otherwise, or some other means of demonstrating that this follows from antirealism.
> But if they won’t be at the funeral, they won’t know whether the state was realised, and so they can get their pleasure just imagining it happen—it doesn’t otherwise matter to them whether it does.
When performing an action, my goal is to achieve the desired outcome. I don’t have to experience the outcome to be motivated to perform the action.
> Not by coincidence I think, this arguably gives the antirealist access to a basically hedonistic quasi-morality in practice (though no recourse to defend it), but not to any common alternative.
I don’t endorse this view, and I deny that, as an antirealist, I’d have any need to “defend” a moral standard. This sounds to me a bit like suggesting I’d be unable to defend what my favorite color is, which is true; I just don’t think my color preferences require any sort of defense.
> If you start with the common belief that there is some such ‘objective’ morality and some set of reasoning steps or tools that would let us access it, you can potentially correct the other’s use of those tools in good faith. If one of you doesn’t actually believe that process is even possible, it would be disingenuous to suppose there’s something even to correct.
This still may or may not be connected to anyone’s motivations. I don’t care at all what the moral facts are. I only act based on my own preferences, and I have absolutely no desire whatsoever to do whatever is stance-independently moral.
***
I’d be happy to talk about the other claims in the sidebar as well but I’m not sure I understand some of them. Can you elaborate on these?
> The unique evolutionary inexplicability of utilitarianism (h.t. Joshua Greene’s argument in Moral Tribes) and how antirealists can explain this
> The convergence of moral philosophers towards three heavily overlapping moral philosophies—given the infinite possible moral philosophies and how antirealists can explain this
What is it antirealists are supposed to explain, specifically?
Also, I don’t intend to be argumentative about literally everything, but some of us may find other aspects of these topics more interesting than you do, so which of these topics is most interesting can vary.
> I would not accept this characterization. Antirealism is the view that there are no stance-independent moral facts.
I don’t understand the difference, which is kind of the problem I identified in the first place. It’s difficult to reject the existence of a phenomenon you haven’t defined (the concept of ignosticism applies here). ‘Moral facts’ sounds to me like something along the lines of ‘the truth values behind normative statements’ (though that has further definitional problems relating to both ‘truth values’—cf. my other most recent reply—and ‘normative statements’).
If you reject that definition, it might be more helpful to define moral facts by exclusion from seemingly better understood phenomena. For example, I think more practical definitions might be:
- Nonphysical phenomena
- Nonphysical and nonexperiential phenomena
Obviously this has the awkwardness of including some paranormal phenomena, but I don’t think that’s a huge cost. Many paranormal phenomena obviously would be physical, were they to exist (as in, they can exert force, have mass etc), and you and I can probably agree we’re not that interested in the particular nonexistences of most of the rest.
> I have all kinds of preferences that are totally unrelated to my own experiences
I wrote a long essay about the parameters of ‘preference’ in the context of preference utilitarianism here, which I think equally applies to supposedly nonmoral uses of the word (IIRC I might have shown it to you before?). The potted version is that people frequently use the word in a very motte-and-bailey-esque fashion, sometimes invoking quasi magical properties of preferences, other times treating them as an unremarkable part of the physical or phenomenal world. I think that’s happening here:
> {cultural relativism … daughter} examples
There’s a relatively simple experientialist account of these, which goes ‘people pursue their daughter’s/culture’s wellbeing because it gives them some form of positive valence to do so’. This is the view which I accuse of being a conflict doctrine (unless it’s paired with some kind of principled pursuit of such positive valence elsewhere).
You seem to be saying your view is not this: ‘I’d do it because I value more than just my own experiences’.
If this is true, then I think many of my criticisms don’t apply to you—but I also think this is a very selective notion of antirealism. Specifically, it requires a notion of ‘to value’, which you’re saying is *not* exclusively experiential (and presumably isn’t otherwise entirely physical too—unless you say its nonexperiential components are just a revealed preference in your behaviour?).
Perhaps you just mean a more expansive notion of experiential value than the word ‘happiness’ implies. I use the latter to mean ‘any positively valenced experience’, fwiw—I don’t think the colloquial distinction is philosophically interesting. But that puts you back in the ‘doctrine of conflict’ camp, if you aren’t able to guide someone, through dispassionate argument, to value your daughter/culture the way you do if they don’t already.
For the record, I am not claiming that a large majority of persuasion falls into the 6th/7th groups. I think it’s a tiny minority of it in fact—substantially less than the amount which is e.g. demonstrating how to think logically or understand statistics, or persuading someone to change their mind with logic or statistical data, both of which are already minuscule.
But the difference between antirealism and exclusivism/realism is that antirealism excludes the possibility of such interactions entirely.
> When performing an action, my goal is to achieve the desired outcome. I don’t have to experience the outcome to be motivated to perform the action.
But you have no access to whether the outcome is achieved, only to your phenomenal experience of changing belief that it will/won’t be or has/hasn’t been. So if you don’t recognise the valence of that process of changing belief as the driver of your motivation and instead assert that some nonphysical link between your behaviour and the outcome is driving you, then under the exclusionary definition of moral facts you appear to be invoking one.
> Can you elaborate on these?
> The unique evolutionary inexplicability of utilitarianism (h.t. Joshua Greene’s argument in Moral Tribes) and how antirealists can explain this
> The convergence of moral philosophers towards three heavily overlapping moral philosophies—given the infinite possible moral philosophies and how antirealists can explain this
> What is it antirealists are supposed to explain, specifically?
When we see a predictable pattern in the world, we generally understand it to be the result of some underlying law or laws, such that if you knew everything about the universe you could in principle predict the pattern before seeing it.
It seems basically impossible to explain the convergence towards the philosophies above by any law currently found in physical science. Evolutionary processes might drive people to protect their kin, deter aggressors etc, but there’s no need for any particular cognitive or emotional attachment to the ‘rightness’ of this (there’s no obvious need for any emotional state at all, really, but even given that we have them they might have been entirely supervenient on behaviour, or universally tended towards cold pragmatism or whatever). And evolutionary processes have no ability to explain a universally impartial philosophy like utilitarianism, which is actively deleterious to its proponents’ survival and reproductive prospects.
So what are the underlying laws by which one could have predicted the convergence of moral philosophies, rather than just virtue signalling and similar behaviours, in particular to a set including utilitarianism?
Why wouldn’t 6 be available to an antirealist? If I’m a utilitarian and they’re a utilitarian, I could convince them a course of action would maximize utility. This would be a bit like (1): convincing them that a course of action would be consistent with their values.
If by (7) what you mean by “correct” is demonstrating that a course of action is in line with the stance-independent moral facts, an antirealist couldn’t sincerely attempt to do that, but I don’t think this carries any significant practical implications.
> And without those, the only honest/nonviolent option you have to persuade me is not going to be available to you the majority of the time, since usually I’m going to be better informed than you about what things are in fact good for me.
I don’t think I need to have better access to someone’s values to make a compelling case. For instance, suppose I’m running a store and someone breaks in with a gun and demands I empty the cash register. I don’t have to know what their values are better than they do to point out that they are on lots of security cameras, or that the police are on their way, and so on. It isn’t that hard to appeal to people’s values when convincing them. We do this all the time.
And, for what it’s worth, I think that in practice the vast majority of the time (in fact, personally, I suspect virtually all the time except for rare cases of weird philosophers) what people are doing is appealing to a person’s own values, not attempting to convince them that their values are misaligned with the stance-independent moral facts.
Part of the reason for this is that I don’t think most people are moral realists, so it wouldn’t make sense for them to argue on behalf of moral realism or to appeal to others under the presumption that they are moral realists.
Another reason I think this is that I don’t think the move from convincing someone of what the stance-independent moral facts are to their acting in any particular way is that straightforward. You’d have to make a separate case for motivational internalism to show that convincing them is enough to motivate them, while if you instead abandon this, it’s possible the people you’re convincing can be persuaded of what the stance-independent moral facts are, but simply not care.
Speaking for myself, arguing for moral realism would have absolutely no impact on me. I don’t simply reject moral realism. I also deny that if there were stance-independent moral facts, that I’d have any motivation to comply with them (of course, I could be wrong about that). If I’m right, and if I have accurately introspected on my own values, then merely knowing something is stance-independently wrong wouldn’t change what I do at all. I simply don’t care if something is stance-independently moral or immoral. So why would persuading me of that matter?
Whether or not antirealists must rely, in practice, any more so on threats or manipulation than moral realists is an open empirical question. I predict that they don’t. If I had to make predictions, I’d instead predict that moral realists are more likely to threaten or manipulate people to comply with whatever they take the stance-independent moral facts to be. That at least strikes me as a viable alternative hypothesis. Either way, this is an empirical question, and I don’t know of any evidence that antirealists are in a worse position than realists. As an aside: even if they were, that wouldn’t be a good reason to reject the truth of moral antirealism. Reality may simply not include stance-independent moral facts. Even if that foreclosed one mode of persuasion, well, too bad! That’s how reality is.
As an aside, I think this remark:
> This isn’t to say that moral antirealists necessarily will manipulate/threaten etc—I know many antirealists who seem like ‘good’ people who would find manipulating other people for personal gain grossly unpleasant.
…carries the pragmatic implication that antirealists are more likely to be immoral people that threaten or manipulate others. Do you agree?
> This isn’t supposed to be a substantial argument for moral realism, but I think it’s an argument against antirealism.
What exactly is the argument against antirealism? Antirealists cannot honestly appeal to stance-independent moral facts when persuading others. I agree with that. But I don’t know why that should be taken as an argument against moral antirealism.
> As an antirealist it would nonetheless be far better for you to live in a world where the 6th and 7th options were possible.
Well, I think the 6th collapses into the first and that the 7th has no practical benefits, so I’m not persuaded this is true. I do not think we’d be better off in any way at all if moral realism is true, and I am not convinced you’ve shown that we would be.
More generally, I simply deny that anything about antirealism leaves antirealists in an especially weak position where they must rely on threats or manipulation. Antirealists can appeal to people’s values. And I think moral realists would have to do exactly the same thing. If the person in question doesn’t care about what’s true or isn’t motivated by what’s moral, then the realist is going to be in the exact same boat as the antirealist. The only thing the realist does is saddle themselves with more steps.
> I don’t think I need to have better access to someone’s values to make a compelling case. For instance, suppose I’m running a store and someone breaks in with a gun and demands I empty the cash register. I don’t have to know what their values are better than they do to point out that they are on lots of security cameras, or that the police are on their way, and so on. It isn’t that hard to appeal to people’s values when convincing them. We do this all the time.
This is option 1: ‘Present me with real data that shows that on my current views, it would benefit me to vote for them’. Sometimes it’s available, but usually it isn’t.
> Even if that foreclosed one mode of persuasion, well, too bad! That’s how reality is.
‘Too bad! That’s how reality is’ is analogous to the statement ‘too bad! That’s how morality is’ in its lack of foundation. ‘Reality’ and ‘truth’ are not available to us. What we have is a stream of valenced sensory input whose nature seems to depend somewhat on our behaviours. In general, we change our behaviour in such a way as to get better-valenced sensory input, such as ‘not feeling cognitive dissonance’, ‘not being in extreme physical pain’, ‘getting the satisfaction of symbols lining up in an intuitive way’, ‘seeing our loved ones prosper’ etc.
At a ‘macroscopic’ level, this sensory input generally resolves into mental processes approximately like ‘“believing” that there are “facts”, about which our beliefs can be “right” or “wrong”’, ‘it’s generally better to be right about stuff’, and ‘logicians, particle physicists and perhaps hedge fund managers are generally more right about stuff than religious zealots’. But this is all ultimately pragmatic. If avoiding cognitive dissonance didn’t feel good to us, and if looking for consistency in the world didn’t seem to lead to nicer outcomes, we wouldn’t do it, and we wouldn’t care about rightness—and there’s no fundamental sense in which it would be correct or even meaningful to say that we were wrong.
I’m not sure this matters for the question of how reasonable we should think antirealism is—it might be a few levels less abstract than such concerns. But I don’t think it’s entirely obvious that it doesn’t, given the vagueness to which I keep referring about what it would even mean for either moral realism or antirealism to be correct. It might turn out that the least abstract principle we can judge it by is how we feel about its sensory consequences.
> …carries the pragmatic implication that antirealists are more likely to be immoral people that threaten or manipulate others. Do you agree?
Eh, compared to who? I think most people are neither realists nor antirealists since they haven’t built the linguistic schema for either position to be expressible (I’m claiming that it’s not even possible to do so, but that’s neither here nor there). So antirealists are obviously heavily selected to be a certain type of nerd, which probably biases their population towards a generally nonviolent, relatively scrupulous and perhaps affable disposition.
But selection is different from cause, and I would guess that among that nerd group, being utilitarian tends to cause one to be fractionally more likely to promote global utility, being contractualist tends to cause one to be fractionally more likely to uphold the social contract, etc (I’m aware of the paper arguing that moral philosophers don’t seem to be particularly moral, but that was hardly robust science. And fwiw it vaguely suggests that older books—which bias heavily nonutilitarian—tempted more immorality).
The alternative is to believe that such people are all completely uninfluenced by their phenomenal experience of ‘belief’ in those philosophies, or that many of them are lying about having it (plausible, but leaves open the question of the effects of belief on behaviour of the ones who aren’t), or some other such surprising disjunction between their mental state and behaviour.