Thanks for reading. Re: your version of anti-realism: is “I should create flourishing (or whatever your endorsed theory says)” in your mouth/from your perspective true, or not truth-apt?
To me, Clippy’s having or not having a moral theory doesn’t seem very central. E.g., we can imagine versions in which Clippy (or some other human agent) is quite moralizing, non-specific, universal, etc. about clipping, maximizing pain, or whatever.
It’s not truth-apt. It has a truth-apt component (that my moral theory endorses creating flourishing). But it also has a non-truth-apt component, namely “hooray my moral theory”. I think this gets you a lot of the benefits of cognitivism, while also distinguishing moral talk from standard truth-apt claims about my or other people’s preferences (which seems important, because agreeing that “Clippy was right when it said it should clip” feels very different from agreeing that “Clippy wants to clip”).
I can see how this was confusing in the original comment; sorry about that.
I think the intuition that Clippy’s position is very different from ours starts to weaken if Clippy has a moral theory. For example, at that point we might be able to reason with Clippy and say things like “well, would you want to be in pain?”, etc. It may even (optimistically) be the case that properties like non-specificity and universality are strong enough that any rational agent that strongly subscribes to them will end up with a reasonable moral system. But you’re right that it’s somewhat non-central, in that the main thrust of my argument doesn’t depend on it.