Interesting post. I interpret the considerations you outline as forming a pretty good argument against moral realism. Partly that’s because I think that there’s a stronger approach to internalist anti-realism than the one you suggest. In particular, I interpret a statement that “X should do Y” as something like: “the moral theory which I endorse would tell X to do Y” (as discussed further here). And by “moral theory” I mean something like: a subset of my preferences which has certain properties, such as being concerned with how to treat others, not being specific to details of my life, being universalisable, etc. (Although the specific properties aren’t that important; it’s more of a family resemblance.)
So I certainly wouldn’t say that Clippy should clip. And even if Clippy says that, I don’t need to agree that it’s true even from Clippy’s perspective. Firstly because Clippy doesn’t endorse any moral theory; and secondly because endorsements aren’t truth-apt. On this position, the confusion comes from saying “X should do Y” without treating it as a shorthand for “X should do Y according to Z; hooray for Z”.
From the perspective of morality-as-coordination-technology, this makes a lot of sense: there’s no point having moral discussions with an entity that doesn’t endorse any moral theory! (Or proto-theory, or set of inchoate intuitions; these are acceptable substitutes.) But within a community of people who have sufficiently similar moral views, we still have the ability to say “X is wrong” in a way that everyone agrees with.
Thanks for reading. Re: your version of anti-realism: is “I should create flourishing (or whatever your endorsed theory says)” in your mouth/from your perspective true, or not truth-apt?
To me, Clippy’s having or not having a moral theory doesn’t seem very central. E.g., we can imagine versions in which Clippy (or some human agent) is quite moralizing, non-specific, universal, etc., about clipping, maximizing pain, or whatever.
It’s not truth-apt. It has a truth-apt component (that my moral theory endorses creating flourishing). But it also has a non-truth-apt component, namely “hooray my moral theory”. I think this gets you a lot of the benefits of cognitivism, while also distinguishing moral talk from standard truth-apt claims about my or other people’s preferences (which seems important, because agreeing that “Clippy was right when it said it should clip” feels very different from agreeing that “Clippy wants to clip”).
I can see how this was confusing in the original comment; sorry about that.
I think the intuition that Clippy’s position is very different from ours starts to weaken if Clippy has a moral theory. For example, at that point we might be able to reason with Clippy and say things like “well, would you want to be in pain?”, etc. It may even (optimistically) be the case that properties like non-specificity and universality are strong enough that any rational agent which strongly subscribes to them will end up with a reasonable moral system. But you’re right that it’s somewhat non-central, in that the main thrust of my argument doesn’t depend on it.