“The version of anti-realism I’m arguing for in this sequence is a blend of error theory and non-objectivism. It seems to me that any anti-realist has to endorse error theory (in some sense at least) because realists exist, and it would be uncharitable not to interpret their claims in the realist fashion. However, the non-objectivist perspective seems importantly correct as well.”
I think we probably have very similar views, but I am less of a fan of error theory. What might it look like to endorse error theory as an anti-realist? Well, as an anti-realist I think that my claims about morality are perfectly reasonable and often true, since I intend them to be speaker-dependent. It’s just the moral realists whose claims are in error. So that leads to the bizarre situation where I can have a conversation about object-level morality with a moral realist, and we might even change each other’s minds, but throughout the whole conversation I’m evaluating every statement he says as trivially incorrect. This seems untenable.
“Even anti-realists can adopt the notion of ‘moral facts,’ provided that we think of them as facts about a non-objective (speaker-dependent) reality, instead of facts about a speaker-independent (objective) one.”
Again, I expect we mostly agree here, but the phrase “facts about a non-objective (speaker-dependent) reality” feels potentially confusing to me. Would you consider it equivalent to say that anti-realists can think about moral facts as facts about the implications of certain evaluation criteria? From this perspective, when we make moral claims, we’re implicitly endorsing a set of evaluation criteria (making this position somewhere in the middle of cognitivism and non-cognitivism).
I’ve fleshed out this position a little more in this post on “a pragmatic approach to interpreting moral claims”.
“I think we probably have very similar views, but I am less of a fan of error theory. What might it look like to endorse error theory as an anti-realist? Well, as an anti-realist I think that my claims about morality are perfectly reasonable and often true, since I intend them to be speaker-dependent. It’s just the moral realists whose claims are in error.”
This is how I think of it, yeah. Non-objectivism for the anti-realist and error theory for the realist.
“So that leads to the bizarre situation where I can have a conversation about object-level morality with a moral realist, and we might even change each other’s minds, but throughout the whole conversation I’m evaluating every statement he says as trivially incorrect. This seems untenable.”
I see. You could switch back and forth between two ways of interpreting the realist’s moral claims. On the one hand, they are making some kind of error. But as you say, you can still have fruitful discussions. I’d characterize the second interpretation as “pragmatically re-interpreting realist claims.” I think this matches what you propose in your blogpost (as a way to interpret all moral claims)! :)
“Again, I expect we mostly agree here, but the phrase ‘facts about a non-objective (speaker-dependent) reality’ feels potentially confusing to me. Would you consider it equivalent to say that anti-realists can think about moral facts as facts about the implications of certain evaluation criteria? From this perspective, when we make moral claims, we’re implicitly endorsing a set of evaluation criteria (making this position somewhere in the middle of cognitivism and non-cognitivism).”
Yes, exactly. That sounds clearer.
Cool, glad we’re on the same page. The following is a fairly minor point, but I thought it might still be worth clarifying.
“You could switch back and forth between two ways of interpreting the realist’s moral claims.”
I guess that, while in principle this makes sense, in practice language is defined on a community level, and so it’s just asking for confusion to hold this position. In particular, ethics is not cleanly separable from meta-ethics, and so I can’t always reinterpret a realist’s argument in a pragmatic way without losing something. But if realists use ‘morality’ to always implicitly mean ‘objective morality’, then I don’t know when they’re relying on the ‘objective’ bit in their arguments. That seems bad.
The alternative is to agree on a “lowest common denominator” definition of morality, and expect people who are relying on its objectiveness or subjectivity to explicitly flag that. As an analogy, imagine that person A thinks we live in a simulation, and person B doesn’t, and person B tries to define “cats” so that their definition includes the criterion “physically implemented in the real world, not just in a simulation”. In that case, person A believes that no cats exist, in that sense.
I think the correct response from A is to say “No, you’re making a power grab for common linguistic territory, which I don’t accept. We should define ‘cats’ in a way that doesn’t make it a vacuous concept for many members of our epistemic community. So I won’t define cats as ‘simulated beings’ and you won’t define them as ‘physical beings’, and if one of your arguments about cats relies on this distinction, then you should make that explicit.”
This post is (as usual) relevant: https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside
I could equivalently describe the above position as: “when your conception of something looks like Network 2, but not everyone agrees, then your definitions should look like Network 1.”
“But if realists use ‘morality’ to always implicitly mean ‘objective morality’, then I don’t know when they’re relying on the ‘objective’ bit in their arguments. That seems bad.”
Okay, you’re definitely right that it would be weird to always (also) interpret realists as making an error. How about this:
- When a realist makes a moral claim where it matters that they believe in realism, we interpret their claim in the sense of error theory.
- When the same realist makes a moral claim that’s easily separable from their belief in realism, we interpret their claim according to what you call the “lowest common denominator.”
- Sometimes we may not be able to tell whether a person’s belief in realism influences their first-order moral claims. Those cases could benefit from clarifying questions.
So, instead of switching back and forth between two interpretations, we only hold one interpretation or the other (if someone clearly commits themselves to either objectivism or non-objectivism), or we treat the moral claim with an under-determined interpretation that’s compatible with both realism and anti-realism (the lowest common denominator).
I agree that this^ is much better (and more charitable) than interpreting realists as always making an error! (When I thought of realists making moral claims, I primarily envisioned a discussion about metaethics.)
“The alternative is to agree on a ‘lowest common denominator’ definition of morality, and expect people who are relying on its objectiveness or subjectivity to explicitly flag that.”
That makes sense. I called the pragmatic re-interpretation “non-objectivism” in my post, but terminology-wise, that’s a bit unfair because it already presupposes anti-realism. “Not-necessarily-objectivism” would be a term that’s more neutral. This seems appropriate for everyday moral discourse.
(The reason I think anti-realism is compelling is that the lowest common denominator already feels like it’s enough to get all the object-level reasoning off the ground.)
“I could equivalently describe the above position as: ‘when your conception of something looks like Network 2, but not everyone agrees, then your definitions should look like Network 1.’”
I’d say it depends on what “mode” you’re in. Your point certainly applies when it comes to descriptively interpreting what people mean. But there’s also the meliorist mode of trying to nudge people towards more useful concepts. A lot of folk concepts get stretched beyond their limits extremely quickly when discussion switches from everyday contexts to more philosophical ones. Trying to do philosophy without improving our concepts seems like trying to build skyscrapers with only axes and knives. (Of course, you probably agree with this.)
There’s also a question about whether you, as an anti-realist, consider realism to be clearly mistaken, or whether you think it might be the case that realists just think within a very different conceptual repertoire. If it’s only the latter, it would be uncharitable to ever consider them as “making an error.” My view is that there are certainly cases where I’m not sure (or where I’d say I agree with some versions of realism but don’t find them quite ‘worthy of the name’), but in many instances I’d say realists are committing a real error. One that they would recognize by their own standards. That’s why I wrote this sequence: I’m hoping to eventually change the minds of at least a bunch of realists. :)