> There are a lot of convincing arguments against certain forms of moral realism, but there are neither good arguments nor evidence that would allow us to entirely write off the possibility that an objectively and universally true answer to these questions exists. (There is no proof that objectivist moral realism is not true.)
That’s an interesting endnote.
I think the arguments against this type of non-naturalist moral realism (or “moral realism based on irreducible normativity,” as I sometimes call it) are indeed pretty decisive – but I’d agree that you can’t get to 100%. Still, do you have thoughts on the success criteria for how someone could determine if they have found the answer to universal objective meaning? If not, why do you think there’s an answer? Without knowing anything about the concept’s content and without understanding the success criteria for having found the right content, is there a way for the concept to have a well-specified meaning (instead of being a pointer to a subjective feeling)? Based on my understanding of how words obtain reference, I don’t see any such way.
In any case, the endnote makes it sound like you’d want people to continue searching anyway because of some wager. But why does universal objective meaning trump subjective meaning? That seems to beg the question.
Also, putting time and effort into researching the content of some obscure concept (that most likely doesn’t have any content) has opportunity costs. What would you do if, even after training the future’s most advanced AI systems to do philosophy, the answer continues to be “We can’t answer your question; you’re using concepts in a way that doesn’t make sense”?
If continuing the search would take up most of the world’s resources, at what point would you say “Okay, I’m sufficiently convinced that this endeavor – the search for UOM – was misguided. Let’s optimize for things that people find subjectively good and meaningful, like helping others, reducing suffering, accomplishing personal life goals, perhaps (for some people) creating new happy people, etc.”?
If such a point would come, why do you think we haven’t reached it yet? (Or maybe we could say we have mostly reached it, but it could be worthwhile to keep the possibility of moral realism based on irreducible normativity in the back of our heads to double-check our assumptions in case we ever build those AI philosophy advisors?)
Alternatively, if there were no such point at which you’d give up on an (ex hypothesi increasingly costly) search, doesn’t that seem strangely fanatical?
(To be clear, I don’t think everything that goes under the label “non-naturalist moral realism” is >99% likely to be meaningless or confused. Some of it just seems under-defined and therefore a bit pointless, and some of it seems to be so similar to naturalist moral realism that we can just discuss the arguments for and against naturalist moral realism instead – which are pretty different and don’t apply to the way the OP is arguing.)
Thanks for your comment, very interesting!

I do not agree that UOM is necessarily non-naturalist in essence; it might very well be that some natural property of the world turns out to be synonymous with good/meaningful/right/UOM. I am currently agnostic on this. (I might be misunderstanding the terminology, though.)
>Without knowing anything about the concept’s content and without understanding the success criteria for having found the right content, is there a way for the concept to have a well-specified meaning (instead of being a pointer to a subjective feeling)?
This is a valid point and you are right: I currently do not claim to know either the content or the success criteria. Whether this still allows the concept to have a ‘well-specified meaning’ depends on your definition of well-specified meaning. I claim that neither content nor success criteria are necessary for the concept to be well-specified enough that it retains its powerful metaphysical and practical implications: fulfillment of UOM would be literally, objectively good/right/meaningful and therefore highly relevant for how you live your life.
You make two good points:
1. Why should objective meaning trump subjective meaning?
2. The search for UOM might have some counter-intuitive implications in practice (opportunity costs; when do we give up?; why don’t we have a reason to give up now?; doesn’t this seem fanatical?).
1: Why should objective meaning trump subjective meaning?
You are right, this begs the question, but so do subjectivist stances.
Why should you do what feels meaningful or what feels right? Because it feels meaningful/feels right. Objectively, this might only feel meaningful due to arbitrary environmental circumstances: e.g., actions that promote reproductive fitness, like human procreation, feel meaningful because we are the product of evolution, a process that necessarily produces agents with the subjective intuition that reproductive fitness is meaningful. However, subjectively we obviously do not care about this arbitrariness of the set of actions that feel good/meaningful.
Why should you do what is objectively universally meaningful or what you ought to do? Because it is by definition objectively universally meaningful and what you ought to do. Also, because you might subjectively feel like you should do what is truly (universally objectively) meaningful rather than just what feels meaningful.
In the case of UOM, there is always a rational argument, independent of subjective feelings, to do what UOM implies (as it is based on what is objectively true). However, any agent is by definition subjective, and it is subjective whether they find this rational argument compelling. So in some sense, adherence to either stance is actually powered by subjective intuitions about meaning:
Do you have the subjective intuition that what you do should be objectively good? Or do you have the subjective intuition that you should do what feels good?
An agent that happens to have the former intuition (maybe a hardcore rationalist/naturalist) will be pulled towards the sphere of influence of the objectivist belief system, UOM.
2: The search for UOM might have some counter-intuitive implications in practice (opportunity costs; when do we give up?; why don’t we have a reason to give up now?; doesn’t this seem fanatical?).
You raise a good point about why the search for UOM might in practice be absurd in some circumstances, and I like the thought experiments. I think our crux might lie in our priors about the current extent of human intellect and our level of understanding of the universe.
I agree that an increasingly costly, fanatical search for UOM at a point where you have some evidence pointing towards its non-existence (an omniscient AI telling you it does not exist) is absurd.
One heuristic for deciding whether further search for UOM would be misguided could be to consider our current knowledge of the universe and the nature of reality, the current rate of change of that knowledge, and the existence of evidence that there is no UOM. If knowledge is high, the rate of change is almost zero (i.e., we seem to be converging on maximum understanding), and especially if there is evidence of non-existence, further search for UOM is likely misguided.
I think we are currently far from this point, knowing almost nothing about the universe and not even knowing the full extent of how much we actually do not know about it. Therefore I think that working towards the search for UOM (which currently mostly implies x-risk reduction anyway) is far from fanatical or absurd. On the contrary, I believe it is in some sense endlessly primitive and hubristic to take our own (arbitrary) subjective feelings about morality and the meaning of life as the ne plus ultra and discount the possibility of UOM (and potentially lock in these possibly objectively meaningless subjectivist stances forever if a certain form of AGI alignment is successful).
In this sense, I currently think that the opportunity costs of not considering UOM are (astronomically) higher than those of considering it. This might well change in the future or in some sufficiently sophisticated thought experiment, but I think we are in practice far from that point.
> I do not agree that UOM is necessarily non-naturalist in essence; it might very well be that some natural property of the world turns out to be synonymous with good/meaningful/right/UOM.
Views that say “we don’t know the content of good/meaningful/right but it’s what’s important nonetheless” are usually non-naturalist because of the open-question argument: For any naturalist property we might identify as synonymous with good/meaningful/right, one can ask “Did we really identify the right property?”
Moral naturalists would answer: “That’s a superfluous question. We’ve already determined that the property in question is relevant to things we care about in ways xyz. That’s what we mean when we use moral terminology.”
By contrast, non-naturalists believe that the open question argument has a point. I’d say the same intuition that drives the open question argument against moral naturalism seems to be a core intuition behind your post. The intuition says that the concepts “good/meaningful/right” have a well-specified and action-relevant meaning even though we’re clueless about it and can’t describe the success criteria for having found the answer.
(Some non-naturalists might say that good/meaningful/right may turn out to be co-extensional with some natural property, but not synonymous. This feels a bit like trying to have the cake and eat it; I’m confused about how to interpret that sort of talk. I can’t think up a good story of how we could come into the epistemic position of understanding that non-naturalist moral concepts are co-extensional with specific naturalist concepts while maintaining that “things could have been otherwise.”)
I don’t think these distinctions are inherently particularly important, but it’s useful to think about whether your brand of moral realism is more likely to fail because of (1) “accommodation charges” (“queerness”) or due to (2) expert moral disagreement / not being able to compellingly demonstrate that a specific natural property is unambiguously the thing everyone (who’s altruistic?) ought to orient their lives towards. (I’d have thought that 2 is more typically associated with moral naturalism, but there seem to be edge cases. For instance, Parfit’s metaethical view strikes me as more susceptible to counterarguments of the second type – see for instance his “climbing the same mountain” analogy or his claim that his life’s work would be in vain if he’s wrong about his convergence arguments. While I’ve seen people call Parfit’s view “non-naturalist” in some places, I’ve also heard the term “quietist,” which seems to have the loose meaning of “kind of non-naturalist, but the naturalism vs. non-naturalism distinction is beside the point and my view therefore doesn’t make any strange metaphysical claims.” In any case, it seems to me that Parfit thinks we know a great deal about “good/meaningful/right” and, moreover, that this knowledge is essential for his particular metaethical position, so his brand of moral realism seems strictly different from yours.)
> You are right, this begs the question, but so do subjectivist stances.
Subjectivist stances feel more intellectually satisfying, IMO. I argue here that a moral ontology (“conceptual option space”) based on subjective life goals fulfills the following criteria:
Relevant: Life goals matter to us by definition.
Complete: The life-goals framework allows us to ask any (prescriptive)[33] ethics-related questions we might be interested in, as long as these questions are clear/meaningful. (In the appendix, I’ll sketch what this could look like for a broad range of questions. Of course, there’s a class of questions that don’t fit into the framework. As I have argued in previous posts, questions about irreducible normativity don’t seem meaningful.)
Clear: The life-goals framework doesn’t contain confused terminology. Some features may still be vague or left under-defined, but the questions and thoughts we can express within the framework are (so I hope) intelligible.
By contrast, I think non-naturalist views fail some of these criteria. Elsewhere (see links in my previous comment), I’ve argued that moral realism based on irreducible normativity is not worth wanting because it cannot be made to work in anywhere close to the way our intuition about the terminology would indicate (similar to the concept of libertarian free will).
> One heuristic for deciding whether further search for UOM would be misguided could be to consider our current knowledge of the universe and the nature of reality, the current rate of change of that knowledge, and the existence of evidence that there is no UOM. If knowledge is high, the rate of change is almost zero (i.e., we seem to be converging on maximum understanding), and especially if there is evidence of non-existence, further search for UOM is likely misguided.
> I think we are currently far from this point, knowing almost nothing about the universe and not even knowing the full extent of how much we actually do not know about it.
I feel like the disagreement is more about “how to reason” than about “how much do we know?” My main argument against something like UOM is “this contains concepts that aren’t part of my philosophical repertoire.” I argue for this in posts 2 and 3 of my anti-realism sequence.
I have more thoughts but they’re best explained in the sequence.