I'm not sure how either of the last two is harder to explain on an anti-realist view than a moral realist view. I don't think anti-realists would accept that they aren't possible on their view.
Show me that their policies are more aligned with a moral view that we both in fact share
This makes sense on a lot of anti-realist views. Anti-realists don't think that people don't have dispositions that are well described as moral. It's possible to share dispositions, and in fact we all empirically do share a lot of moral dispositions.
Persuade me to accept whatever (you think) is the "correct" moral view, and show me that their policies are aligned with it
Also seems fine on an anti-realist view. I don't see how persuading is easier for a moral realist; surely you would still need to appeal to something that your interlocutor already believes/values.
I would characterise antirealism as something like "believing that there is no normative space, and hence no logical or empirical line of reasoning you could give to change someone's motivations."
I don't think anti-realists would accept that they aren't possible on their view.
I would be interested to hear a counterexample that isn't a language game. I don't see how one can sincerely advocate that someone else hold a position they think is logically indefensible.
Anti-realists don't think that people don't have dispositions that are well described as moral. It's possible to share dispositions, and in fact we all empirically do share a lot of moral dispositions.
I think this is a language game. A "disposition" is not the same phenomenon that someone who believes their morality has some logical/empirical basis thinks their morality is. A disposition isn't functionally distinct from a preference - something we can arguably share, but, per Hume, something which has nothing to do with reason.
Someone who believed in a moral realist view that valued a state whose realisation they would never experience (black ties at their own funeral, for instance) should be highly sceptical of a moral antirealist who claimed to value the same state even though they also wouldn't experience it. The realist believes the word "value" in that sentence means something motivationally relevant to a moral realist. To an antirealist it can only mean something like "pleasing to imagine". But if they won't be at the funeral, they won't know whether the state was realised, and so they can get their pleasure just imagining it happening; it doesn't otherwise matter to them whether it does.
Not by coincidence, I think, this arguably gives the antirealist access to a basically hedonistic quasi-morality in practice (though no recourse to defend it), but not to any common alternative.
I don't see how persuading is easier for a moral realist; surely you would still need to appeal to something that your interlocutor already believes/values.
If you start with the common belief that there is some such "objective" morality and some set of reasoning steps or tools that would let us access it, you can potentially correct the other's use of those tools in good faith. If one of you doesn't actually believe that process is even possible, it would be disingenuous to suppose there's something even to correct.
***
FWIW, we're spilling a lot of ink over by far the least interesting part of my initial comment. I would expect it to be more productive to talk about e.g.:
The analogy of (the irrelevance of) moral realism to (the irrelevance of) mathematical realism/Platonism
The unique inexplicability of utilitarianism in evolutionary terms (h.t. Joshua Greene's argument in Moral Tribes), and how antirealists can explain this
The convergence of moral philosophers towards three heavily overlapping moral philosophies (given the infinite possible moral philosophies), and how antirealists can explain this
My suggestion that this convergence strongly implies a process by which some or most moral philosophies can be excluded: does this seem false? Or true, but insufficiently powered to narrow the picture down?
My suggestion that iterative self-modification of one's motivations might converge: whether people disagree with this suggestion, or agree but think the phenomenon is explicable in e.g. strictly physical terms or otherwise uninteresting
My suggestion that if we accept that motivation has its own set of axiom-like properties, we might be able to "derive" quasi-moral views in the same way we can derive properties about applied maths or physics (i.e. not that they're necessarily "true", whatever that means, but that we will necessarily behave in ways that in some sense assume them to be)
I would characterise antirealism as something like "believing that there is no normative space, and hence no logical or empirical line of reasoning you could give to change someone's motivations."
I would not accept this characterization. Antirealism is the view that there are no stance-independent moral facts. I don't think it logically entails any particular normative implications at all, so I do not think it implies that there is no "normative space" or "no logical or empirical line of reasoning" you could give to change someone's motivations.
I would be interested to hear a counterexample that isn't a language game. I don't see how one can sincerely advocate that someone else hold a position they think is logically indefensible.
I'm not sure what you have in mind by a language game, but you gave this as an example of something an antirealist has no access to: "Show me that their policies are more aligned with a moral view that we both in fact share."
Why wouldn't an antirealist have access to this? There are a few obvious counterexamples. Here's one: cultural relativism. If two people are cultural relativists and are members of the same culture, one of them could readily convince the other that a policy is more in line with the moral standards of the culture than some other policy. The same generalizes to other antirealist positions, such as various constructivist views.
Someone who believed in a moral realist view that valued a state whose realisation they would never experience (black ties at their own funeral, for instance) should be highly sceptical of a moral antirealist who claimed to value the same state even though they also wouldn't experience it.
Why should they be skeptical? I am a moral antirealist and I have all kinds of preferences that are totally unrelated to my own experiences. I want my daughter to go on to live a happy life long after I am dead, and I would actively sacrifice my own welfare to ensure this would be the case even if I'd never experience it. I don't believe I do this because I'd feel happy knowing the sacrifice was made; I'd do it because I value more than just my own experiences.
I see no legitimate reason for realists to be skeptical of antirealists who have values like this. There is nothing special about valuing experiences I'd realize that prioritizes them over ones I won't.
To an antirealist it can only mean something like "pleasing to imagine".
That isn't what that means to me, so I do not think this is correct. If you think this is some kind of logical entailment of moral antirealism, I'd be interested in seeing an attempt at showing a contradiction were an antirealist to think otherwise, or some other means of demonstrating that this follows from antirealism.
But if they won't be at the funeral, they won't know whether the state was realised, and so they can get their pleasure just imagining it happening; it doesn't otherwise matter to them whether it does.
When performing an action, my goal is to achieve the desired outcome. I don't have to experience the outcome to be motivated to perform the action.
Not by coincidence, I think, this arguably gives the antirealist access to a basically hedonistic quasi-morality in practice (though no recourse to defend it), but not to any common alternative.
I don't endorse this view, and I deny that, as an antirealist, I'd have any need to "defend" a moral standard. This to me sounds a bit like suggesting I'd be unable to defend what my favorite color is - which is true; I just don't think my color preferences require any sort of defense.
If you start with the common belief that there is some such "objective" morality and some set of reasoning steps or tools that would let us access it, you can potentially correct the other's use of those tools in good faith. If one of you doesn't actually believe that process is even possible, it would be disingenuous to suppose there's something even to correct.
This still may or may not be connected to anyone's motivations. I don't care at all what the moral facts are. I only act based on my own preferences, and I have absolutely no desire whatsoever to do whatever is stance-independently moral.
***
I'd be happy to talk about the other claims in the sidebar as well, but I'm not sure I understand some of them. Can you elaborate on these?
The unique inexplicability of utilitarianism in evolutionary terms (h.t. Joshua Greene's argument in Moral Tribes), and how antirealists can explain this
The convergence of moral philosophers towards three heavily overlapping moral philosophies (given the infinite possible moral philosophies), and how antirealists can explain this
What is it antirealists are supposed to explain, specifically?
Also, I don't intend to be argumentative about literally everything, but some of us may find other aspects of these topics more interesting than you do, so which of these topics is most interesting can vary.
I would not accept this characterization. Antirealism is the view that there are no stance-independent moral facts.
I don't understand the difference, which is kind of the problem I identified in the first place. It's difficult to reject the existence of a phenomenon you haven't defined (the concept of ignosticism applies here). "Moral facts" sounds to me like something along the lines of "the truth values behind normative statements" (though that has further definitional problems relating to both "truth values" - cf. my other most recent reply - and "normative statements").
If you reject that definition, it might be more helpful to define moral facts by exclusion from seemingly better understood phenomena. For example, I think more practical definitions might be:
Nonphysical phenomena
Nonphysical and nonexperiential phenomena
Obviously this has the awkwardness of including some paranormal phenomena, but I don't think that's a huge cost. Many paranormal phenomena obviously would be physical, were they to exist (as in, they can exert force, have mass, etc.), and you and I can probably agree we're not that interested in the particular nonexistences of most of the rest.
I have all kinds of preferences that are totally unrelated to my own experiences
I wrote a long essay about the parameters of "preference" in the context of preference utilitarianism here, which I think equally applies to supposedly nonmoral uses of the word (IIRC I might have shown it to you before?). The potted version is that people frequently use the word in a very motte-and-bailey-esque fashion, sometimes invoking quasi-magical properties of preferences, other times treating them as an unremarkable part of the physical or phenomenal world. I think that's happening here:
> {cultural relativism … daughter} examples
There's a relatively simple experientialist account of these, which goes "people pursue their daughter's/culture's wellbeing because it gives them some form of positive valence to do so". This is the view which I accuse of being a conflict doctrine (unless it's paired with some kind of principled pursuit of such positive valence elsewhere).
You seem to be saying your view is not this: "I'd do it because I value more than just my own experiences".
If this is true, then I think many of my criticisms don't apply to you - but I also think this is a very selective notion of antirealism. Specifically, it requires a notion of "to value", which you're saying is *not* exclusively experiential (and presumably isn't otherwise entirely physical either - unless you say its nonexperiential components are just a revealed preference in your behaviour?).
Perhaps you just mean a more expansive notion of experiential value than the word "happiness" implies. I use the latter to mean "any positively valenced experience", fwiw - I don't think the colloquial distinction is philosophically interesting. But that puts you back in the "doctrine of conflict" camp, if you aren't able to guide someone, through dispassionate argument, to value your daughter/culture the way you do if they don't already.
For the record, I am not claiming that a large majority of persuasion falls into the 6th/7th groups. I think it's a tiny minority of it, in fact - substantially less than the amount which is e.g. demonstrating how to think logically or understand statistics, or persuading someone to change their mind with logic or statistical data, both of which are already minuscule.
But the difference between antirealism and exclusivism/realism is that antirealism excludes the possibility of such interactions entirely.
When performing an action, my goal is to achieve the desired outcome. I don't have to experience the outcome to be motivated to perform the action.
But you have no access to whether the outcome is achieved, only to your phenomenal experience of changing belief that it will/won't be or has/hasn't been. So if you don't recognise the valence of that process of changing belief as the driver of your motivation, and instead assert that some nonphysical link between your behaviour and the outcome is driving you, then under the exclusionary definition of moral facts you appear to be invoking one.
Can you elaborate on these?
The unique inexplicability of utilitarianism in evolutionary terms (h.t. Joshua Greene's argument in Moral Tribes), and how antirealists can explain this
The convergence of moral philosophers towards three heavily overlapping moral philosophies (given the infinite possible moral philosophies), and how antirealists can explain this
What is it antirealists are supposed to explain, specifically?
When we see a predictable pattern in the world, we generally understand it to be the result of some underlying law or laws, such that if you knew everything about the universe you could in principle predict the pattern before seeing it.
It seems basically impossible to explain the convergence towards the philosophies above by any law currently found in physical science. Evolutionary processes might drive people to protect their kin, deter aggressors, etc., but there's no need for any particular cognitive or emotional attachment to the "rightness" of this (there's no obvious need for any emotional state at all, really, but even given that we have them, they might have been entirely supervenient on behaviour, or universally tended towards cold pragmatism, or whatever). And evolutionary processes have no ability to explain a universally impartial philosophy like utilitarianism, which is actively deleterious to its proponents' survival and reproductive prospects.
So what are the underlying laws by which one could have predicted the convergence of moral philosophies (rather than just virtue signalling and similar behaviours), and in particular convergence to a set including utilitarianism?