Why wouldn’t 6 be available to an antirealist? If I’m a utilitarian and they’re a utilitarian, I could convince them a course of action would maximize utility. This would be a bit like (1): convincing them that a course of action would be consistent with their values.
If what you mean by “correct” in (7) is demonstrating that a course of action is in line with the stance-independent moral facts, then an antirealist couldn’t sincerely attempt to do that, but I don’t think this carries any significant practical implications.
And without those, the only honest/nonviolent option you have to persuade me is not going to be available to you the majority of the time, since usually I’m going to be better informed than you about what things are in fact good for me.
I don’t think I need to have better access to someone’s values to make a compelling case. For instance, suppose I’m running a store and someone breaks in with a gun and demands I empty the cash register. I don’t have to know what their values are better than they do to point out that they are on lots of security cameras, or that the police are on their way, and so on. It isn’t that hard to appeal to people’s values when convincing them. We do this all the time.
And, for what it’s worth, I think that in practice the vast majority of the time (personally, I suspect virtually all the time, except in rare cases involving weird philosophers) what people are doing is appealing to a person’s own values, not attempting to convince them that their values are misaligned with the stance-independent moral facts.
Part of the reason for this is that I don’t think most people are moral realists, so it wouldn’t make sense for them to argue on behalf of moral realism or to appeal to others under the presumption that they are moral realists.
Another reason I think this is that I don’t think the move from convincing someone of what the stance-independent moral facts are to their acting in any particular way is that straightforward. You’d have to make a separate case for motivational internalism to show that convincing them is enough to motivate them; if you instead abandon internalism, it’s possible that the people you’re convincing can be persuaded of what the stance-independent moral facts are but simply not care.
Speaking for myself, arguing for moral realism would have absolutely no impact on me. I don’t simply reject moral realism. I also deny that, if there were stance-independent moral facts, I’d have any motivation to comply with them (of course, I could be wrong about that). If I’m right, and if I have accurately introspected on my own values, then merely knowing something is stance-independently wrong wouldn’t change what I do at all. I simply don’t care if something is stance-independently moral or immoral. So why would persuading me of that matter?
Whether or not antirealists must, in practice, rely on threats or manipulation any more than moral realists do is an open empirical question. I predict that they don’t. If I had to make predictions, I’d instead predict that moral realists are more likely to threaten or manipulate people into complying with whatever they take the stance-independent moral facts to be. That at least strikes me as a viable alternative hypothesis. Either way, this is an empirical question, and I don’t know of any evidence that antirealists are in a worse position than realists. As an aside: even if they were, that wouldn’t be a good reason to reject the truth of moral antirealism. Reality may simply not include stance-independent moral facts. Even if that foreclosed one mode of persuasion, well, too bad! That’s how reality is.
As an aside, I think this remark:
This isn’t to say that moral antirealists necessarily will manipulate/threaten etc—I know many antirealists who seem like ‘good’ people who would find manipulating other people for personal gain grossly unpleasant.
…carries the pragmatic implication that antirealists are more likely to be immoral people who threaten or manipulate others. Do you agree?
This isn’t supposed to be a substantial argument for moral realism, but I think it’s an argument against antirealism.
What exactly is the argument against antirealism? Antirealists cannot honestly appeal to stance-independent moral facts when persuading others. I agree with that. But I don’t know why that should be taken as an argument against moral antirealism.
As an antirealist it would nonetheless be far better for you to live in a world where the 6th and 7th options were possible.
Well, I think the 6th collapses into the first and that the 7th has no practical benefits, so I’m not persuaded this is true. I do not think we’d be better off in any way at all if moral realism is true, and I am not convinced you’ve shown that we would be.
More generally, I simply deny that anything about antirealism leaves antirealists in an especially weak position that forces them to rely on threats or manipulation. Antirealists can appeal to people’s values. And I think moral realists would have to do exactly the same thing. If the person in question doesn’t care about what’s true or isn’t motivated by what’s moral, then the realist is going to be in exactly the same boat as the antirealist. The only thing the realist does is saddle themselves with more steps.
I don’t think I need to have better access to someone’s values to make a compelling case. For instance, suppose I’m running a store and someone breaks in with a gun and demands I empty the cash register. I don’t have to know what their values are better than they do to point out that they are on lots of security cameras, or that the police are on their way, and so on. It isn’t that hard to appeal to people’s values when convincing them. We do this all the time.
This is option 1: ‘Present me with real data that shows that on my current views, it would benefit me to vote for them’. Sometimes it’s available, but usually it isn’t.
Even if that foreclosed one mode of persuasion, well, too bad! That’s how reality is.
‘Too bad! That’s how reality is’ is analogous to the statement ‘too bad! That’s how morality is’ in its lack of foundation. ‘Reality’ and ‘truth’ are not available to us. What we have is a stream of valenced sensory input whose nature seems to depend somewhat on our behaviours. In general, we change our behaviour in such a way as to get better-valenced sensory input, such as ‘not feeling cognitive dissonance’, ‘not being in extreme physical pain’, ‘getting the satisfaction of symbols lining up in an intuitive way’, ‘seeing our loved ones prosper’, etc.
At a ‘macroscopic’ level, this sensory input generally resolves into mental processes approximately like ‘“believing” that there are “facts”, about which our beliefs can be “right” or “wrong”’, ‘it’s generally better to be right about stuff’, and ‘logicians, particle physicists and perhaps hedge fund managers are generally more right about stuff than religious zealots’. But this is all ultimately pragmatic. If cognitive dissonance didn’t feel bad to us, and if looking for consistency in the world didn’t seem to lead to nicer outcomes, we wouldn’t do it, and we wouldn’t care about rightness. And there’s no fundamental sense in which it would be correct or even meaningful to say that we were wrong.
I’m not sure this matters for the question of how reasonable we should think antirealism is; it might be a few levels less abstract than such concerns. But I don’t think it’s entirely obvious that it doesn’t, given the vagueness to which I keep referring about what it would even mean for either moral realism or antirealism to be correct. It might turn out that the least abstract principle we can judge it by is how we feel about its sensory consequences.
…carries the pragmatic implication that antirealists are more likely to be immoral people who threaten or manipulate others. Do you agree?
Eh, compared to whom? I think most people are neither realists nor antirealists, since they haven’t built the linguistic schema for either position to be expressible (I’m claiming that it’s not even possible to do so, but that’s neither here nor there). So antirealists are obviously heavily selected to be a certain type of nerd, which probably biases their population towards a generally nonviolent, relatively scrupulous and perhaps affable disposition.
But selection is different from causation, and I would guess that among that nerd group, being utilitarian tends to cause one to be fractionally more likely to promote global utility, being contractualist tends to cause one to be fractionally more likely to uphold the social contract, etc. (I’m aware of the paper arguing that moral philosophers don’t seem to be particularly moral, but that was hardly robust science. And fwiw it vaguely suggests that older books, which bias heavily nonutilitarian, tempted more immorality.)
The alternative is to believe that such people are all completely uninfluenced by their phenomenal experience of ‘belief’ in those philosophies, or that many of them are lying about having it (plausible, but that leaves open the question of the effects of belief on the behaviour of the ones who aren’t), or some other such surprising disjunction between their mental states and behaviour.