I don’t think I need to have better access to someone’s values to make a compelling case. For instance, suppose I’m running a store and someone breaks in with a gun and demands I empty the cash register. I don’t have to know what their values are better than they do to point out that they are on lots of security cameras, or that the police are on their way, and so on. It isn’t that hard to appeal to people’s values when convincing them. We do this all the time.
This is option 1: ‘Present me with real data that shows that on my current views, it would benefit me to vote for them’. Sometimes it’s available, but usually it isn’t.
Even if that foreclosed one mode of persuasion, well, too bad! That’s how reality is.
‘Too bad! That’s how reality is’ is analogous to the statement ‘too bad! That’s how morality is’ in its lack of foundation. ‘Reality’ and ‘truth’ are not available to us. What we have is a stream of valenced sensory input whose nature seems to depend somewhat on our behaviours. In general, we change our behaviour so as to get better-valenced sensory input, such as ‘not feeling cognitive dissonance’, ‘not being in extreme physical pain’, ‘getting the satisfaction of symbols lining up in an intuitive way’, ‘seeing our loved ones prosper’, etc.
At a ‘macroscopic’ level, this sensory input generally resolves into mental processes approximately like ‘“believing” that there are “facts”, about which our beliefs can be “right” or “wrong”’, ‘it’s generally better to be right about stuff’, and ‘logicians, particle physicists and perhaps hedge fund managers are generally more right about stuff than religious zealots’. But this is all ultimately pragmatic. If cognitive dissonance didn’t feel bad to us, and if looking for consistency in the world didn’t seem to lead to nicer outcomes, we wouldn’t do it, and we wouldn’t care about rightness—and there’s no fundamental sense in which it would be correct or even meaningful to say that we were wrong.
I’m not sure this matters for the question of how reasonable we should think antirealism is—it might be a few levels less abstract than such concerns. But I don’t think it’s entirely obvious that it doesn’t, given the vagueness to which I keep referring about what it would even mean for either moral realism or antirealism to be correct. It might turn out that the least abstract principle we can judge it by is how we feel about its sensory consequences.
…carries the pragmatic implication that antirealists are more likely to be immoral people that threaten or manipulate others. Do you agree?
Eh, compared to whom? I think most people are neither realists nor antirealists, since they haven’t built the linguistic schema for either position to be expressible (I’m claiming that it’s not even possible to do so, but that’s neither here nor there). So antirealists are obviously heavily selected to be a certain type of nerd, which probably biases their population towards a generally nonviolent, relatively scrupulous and perhaps affable disposition.
But selection is different from causation, and I would guess that among that nerd group, being utilitarian tends to cause one to be fractionally more likely to promote global utility, being contractualist tends to cause one to be fractionally more likely to uphold the social contract, etc. (I’m aware of the paper arguing that moral philosophers don’t seem to be particularly moral, but that was hardly robust science. And fwiw it vaguely suggests that older books—which bias heavily nonutilitarian—tempted more immorality.)
The alternative is to believe that such people are all completely uninfluenced by their phenomenal experience of ‘belief’ in those philosophies, or that many of them are lying about having it (plausible, but that leaves open the question of the effects of belief on the behaviour of those who aren’t), or some other such surprising disjunction between their mental state and behaviour.