> I would characterise antirealism as something like ‘believing that there is no normative space, and hence no logical or empirical line of reasoning you could give to change someone’s motivations.’
I would not accept this characterization. Antirealism is the view that there are no stance-independent moral facts. I don’t think it logically entails any particular normative implications at all, so I deny that it leaves no “normative space” and no “logical or empirical line of reasoning” you could give to change someone’s motivations.
> I would be interested to hear a counterexample that isn’t a language game. I don’t see how one can sincerely advocate someone else hold a position they think is logically indefensible.
I’m not sure what you have in mind by a language game, but you gave this as an example of something an antirealist has no access to: “Show me that their policies are more aligned with a moral view that we both in fact share.”
Why wouldn’t an antirealist have access to this? There are a few obvious counterexamples. Here’s one: cultural relativism. If two people are cultural relativists and members of the same culture, one of them could readily convince the other that a policy is more in line with the moral standards of their culture than some other policy. The same generalizes to other antirealist positions, such as various constructivist views.
> Someone who believed in a moral realist view that valued a state whose realisation they would never experience—black ties at their own funeral, for instance—should be highly sceptical of a moral antirealist who claimed to value the same state even though they also wouldn’t experience it.
Why should they be skeptical? I am a moral antirealist and I have all kinds of preferences that are totally unrelated to my own experiences. I want my daughter to go on to live a happy life long after I am dead, and I would actively sacrifice my own welfare to ensure this would be the case even if I’d never experience it. I don’t believe I do this because I’d feel happy knowing the sacrifice was made; I’d do it because I value more than just my own experiences.
I see no legitimate reason for realists to be skeptical of antirealists who have values like this. There is nothing special about valuing experiences I’d realize that prioritizes them over ones I won’t.
> To an antirealist it can only mean something like ‘pleasing to imagine’.
That isn’t what that means to me, so I do not think this is correct. If you think this is some kind of logical entailment of moral antirealism, I’d be interested in seeing an attempt at showing a contradiction were an antirealist to think otherwise, or some other means of demonstrating that this follows from antirealism.
> But if they won’t be at the funeral, they won’t know whether the state was realised, and so they can get their pleasure just imagining it happen—it doesn’t otherwise matter to them whether it does.
When performing an action, my goal is to achieve the desired outcome. I don’t have to experience the outcome to be motivated to perform the action.
> Not by coincidence I think, this arguably gives the antirealist access to a basically hedonistic quasi-morality in practice (though no recourse to defend it), but not to any common alternative.
I don’t endorse this view, and I deny that, as an antirealist, I’d have any need to “defend” a moral standard. This sounds to me a bit like suggesting I’d be unable to defend my favorite color, which is true: I just don’t think my color preferences require any sort of defense.
> If you start with the common belief that there is some such ‘objective’ morality and some set of reasoning steps or tools that would let us access it, you can potentially correct the other’s use of those tools in good faith. If one of you doesn’t actually believe that process is even possible, it would be disingenuous to suppose there’s something even to correct.
This still may or may not be connected to anyone’s motivations. I don’t care at all what the moral facts are. I only act based on my own preferences, and I have absolutely no desire whatsoever to do whatever is stance-independently moral.
***
I’d be happy to talk about the other claims in the sidebar as well but I’m not sure I understand some of them. Can you elaborate on these?
> The unique evolutionary inexplicability of utilitarianism (h.t. Joshua Greene’s argument in Moral Tribes) and how antirealists can explain this
> The convergence of moral philosophers towards three heavily overlapping moral philosophies—given the infinite possible moral philosophies—and how antirealists can explain this
What is it antirealists are supposed to explain, specifically?
Also, I don’t intend to be argumentative about literally everything, but some of us may find other aspects of these topics more interesting than you do, so which of these topics is most interesting can vary.
> I would not accept this characterization. Antirealism is the view that there are no stance-independent moral facts.
I don’t understand the difference, which is kind of the problem I identified in the first place. It’s difficult to reject the existence of a phenomenon you haven’t defined (the concept of ignosticism applies here). ‘Moral facts’ sounds to me like something close to ‘the truth values behind normative statements’ (though that has further definitional problems relating to both ‘truth values’—cf. my other most recent reply—and ‘normative statements’).
If you reject that definition, it might be more helpful to define moral facts by exclusion from seemingly better understood phenomena. For example, I think more practical definitions might be:

- Nonphysical phenomena
- Nonphysical and nonexperiential phenomena
Obviously this has the awkwardness of including some paranormal phenomena, but I don’t think that’s a huge cost. Many paranormal phenomena obviously would be physical, were they to exist (as in, they can exert force, have mass etc), and you and I can probably agree we’re not that interested in the particular nonexistences of most of the rest.
> I have all kinds of preferences that are totally unrelated to my own experiences
I wrote a long essay about the parameters of ‘preference’ in the context of preference utilitarianism here, which I think applies equally to supposedly nonmoral uses of the word (IIRC I might have shown it to you before?). The potted version is that people frequently use the word in a very motte-and-bailey-esque fashion, sometimes invoking quasi-magical properties of preferences, other times treating them as an unremarkable part of the physical or phenomenal world. I think that’s happening here:
> {cultural relativism … daughter} examples
There’s a relatively simple experientialist account of these, which goes ‘people pursue their daughter’s/culture’s wellbeing because it gives them some form of positive valence to do so’. This is the view which I accuse of being a conflict doctrine (unless it’s paired with some kind of principled pursuit of such positive valence elsewhere).
You seem to be saying your view is not this: ‘I’d do it because I value more than just my own experiences’.
If this is true, then I think many of my criticisms don’t apply to you—but I also think this is a very selective notion of antirealism. Specifically, it requires a notion of ‘to value’, which you’re saying is *not* exclusively experiential (and presumably isn’t otherwise entirely physical too—unless you say its nonexperiential components are just a revealed preference in your behaviour?).
Perhaps you just mean a more expansive notion of experiential value than the word ‘happiness’ implies. I use the latter to mean ‘any positively valenced experience’, fwiw—I don’t think the colloquial distinction is philosophically interesting. But that puts you back in the ‘doctrine of conflict’ camp, if you aren’t able to guide someone, through dispassionate argument, to value your daughter/culture the way you do if they don’t already.
For the record, I am not claiming that a large majority of persuasion falls into the 6th/7th groups. I think it’s a tiny minority of it in fact—substantially less than the amount which is e.g. demonstrating how to think logically or understand statistics, or persuading someone to change their mind with logic or statistical data, both of which are already minuscule.
But the difference between antirealism and exclusivism/realism is that antirealism excludes the possibility of such interactions entirely.
> When performing an action, my goal is to achieve the desired outcome. I don’t have to experience the outcome to be motivated to perform the action.
But you have no access to whether the outcome is achieved, only to your phenomenal experience of changing belief that it will/won’t be or has/hasn’t been. So if you don’t recognise the valence of that process of changing belief as the driver of your motivation and instead assert that some nonphysical link between your behaviour and the outcome is driving you, then under the exclusionary definition of moral facts you appear to be invoking one.
> Can you elaborate on these?
> The unique evolutionary inexplicability of utilitarianism (h.t. Joshua Greene’s argument in Moral Tribes) and how antirealists can explain this
> The convergence of moral philosophers towards three heavily overlapping moral philosophies—given the infinite possible moral philosophies—and how antirealists can explain this
> What is it antirealists are supposed to explain, specifically?
When we see a predictable pattern in the world, we generally understand it to be the result of some underlying law or laws, such that if you knew everything about the universe you could in principle predict the pattern before seeing it.
It seems basically impossible to explain the convergence towards the philosophies above by any law currently found in physical science. Evolutionary processes might drive people to protect their kin, deter aggressors etc, but there’s no need for any particular cognitive or emotional attachment to the ‘rightness’ of this (there’s no obvious need for any emotional state at all, really, but even given that we have them they might have been entirely supervenient on behaviour, or universally tended towards cold pragmatism or whatever). And evolutionary processes have no ability to explain a universally impartial philosophy like utilitarianism, which is actively deleterious to its proponents’ survival and reproductive prospects.
So what are the underlying laws by which one could have predicted the convergence of moral philosophies, rather than just virtue signalling and similar behaviours, in particular to a set including utilitarianism?