As a self-identified moral realist, I did not find my own view represented in this post, although perhaps Railton’s naturalist position is the one that comes closest. I can identify as an objectivist, a constructivist, and a subjectivist alike, indeed even as a Randian objectivist. It all rests on the nature of the ill-specified “subject” in question. If one is an open individualist, then subjectivism and objectivism will, one can argue, collapse into one. According to open individualism, the adoption of Randianism (or, in Sidgwick’s terminology, “rational egoism”) implies that we should do what is best for all sentient beings. In other words, subjectivism without indefensibly demarcated subjects (or at least subjects whose demarcation is not granted unjustifiable metaphysical significance) is equivalent to objectivism. Or so I would argue.
As for Moore’s open question argument (which I realize was not explored in much depth here), it seems to me, as has been pointed out by others, that there can be an ontological identity between that which different words refer to even if these words are not commonly reckoned strictly synonymous. For example: Is water the same as H2O? Is the brain the mind? These questions are hardly meaningless, even if we think the answer to both questions is ‘yes’. Beyond that, one can also defend the view that “the good” is a larger set of which any specific good thing we can point to is merely a subset, and hence the question can also make sense in this way (i.e. it becomes a matter of whether something is part of “the good”).
To turn the tables a bit here, I would say that to reject moral realism, on my account, one would need to say that there is no genuine normative force or property in, say, a state of extreme suffering (consider being fried in a brazen bull for concreteness). [And I think one can fairly argue that to say such a state has “genuine normative force” is very much an understatement.]
“Normative force for the experiencing subject or for all agents?” one may then ask. Yet on my account of personal identity, the open individualist account (cf. https://en.wikipedia.org/wiki/Open_individualism and https://www.smashwords.com/books/view/719903), there is no fundamental distinction, and thus my answer would simply be: yes, for the experiencing subject, and hence for all agents (this is where our intuitions scream, of course, unless we are willing to suspend our strong, evolutionarily adaptive sense of self as some entity that rides around in some small part of physical reality).
One may then object that different agents occupy genuinely different coordinates in spacetime, yet the same can be said of what we usually consider the same agent. So there is really no fundamental difference here: If we say that it is genuinely normative for Tim at t1 (or simply Tim1) to ensure that Tim at t2 (or simply Tim2) suffers less, then why wouldn’t the same be true of Tim1 with respect to John1, 2, 3…?
With respect to the One Compelling Axiology you mention, Lukas, I am not sure why you would set the bar so high in terms of specificity in order to accept a realist view. I mean, if “all philosophers or philosophically-inclined reasoners” found plausible a simple yet non-exhaustive principle like “reduce unnecessary suffering”, why would that not be good enough to demonstrate its “realism” (on your account) when a more specific one would? It is unclear to me why greater specificity should matter, especially since even such an unspecific principle would still have plenty of practical relevance (many people can admit that they are not living in accordance with this principle, even as they accept it).
To turn the tables a bit here, I would say that to reject moral realism, on my account, one would need to say that there is no genuine normative force or property in, say, a state of extreme suffering (consider being fried in a brazen bull for concreteness).
Cool! I think the closest I’ll come to discussing this view is in footnote 18. I plan to have a post on moral realism via introspection about the intrinsic goodness (or badness) of certain conscious states.
I agree with reductionism about personal identity and I also find this to be one of the most persuasive arguments in favor of altruistic life goals. I would not call myself an open individualist though, because I’m not sure what the position is saying exactly. For instance, I don’t understand how it differs from empty individualism. I’d understand if these were different framings or different metaphors, but if we assume that we’re talking about positions that can be true or false, I don’t understand what we’re arguing about when asking whether open individualism is true, or when discussing open vs. empty individualism. Also, I think it’s perfectly coherent to have egoistic goals even under a reductionist view of personal identity. (It just turns out that egoism is not a well-defined concept either, and one has to make some judgment calls whenever one encounters edge cases for which our intuitions give no obvious answer about whether something is still “me.”)
With respect to the One Compelling Axiology you mention, Lukas, I am not sure why you would set the bar so high in terms of specificity in order to accept a realist view. I mean, if “all philosophers or philosophically-inclined reasoners” found plausible a simple yet non-exhaustive principle like “reduce unnecessary suffering”, why would that not be good enough to demonstrate its “realism” (on your account) when a more specific one would? It is unclear to me why greater specificity should matter, especially since even such an unspecific principle would still have plenty of practical relevance (many people can admit that they are not living in accordance with this principle, even as they accept it).
Yeah, fair point. I mean, even Railton’s own view has plenty of practical relevance in the sense that it highlights that certain societal arrangements lead to more overall well-being or life satisfaction than others. (That’s also a point that Sam Harris makes.) But if that’s all we mean by “moral realism” then it would be rather trivial. Maybe my criteria are a bit too strict, and I would indeed already regard it as extremely surprising if you get something like One Compelling Axiology that agrees on population ethics while leaving a few other things underdetermined.
For instance, I don’t understand how [open individualism] differs from empty individualism. I’d understand if these were different framings or different metaphors, but if we assume that we’re talking about positions that can be true or false, I don’t understand what we’re arguing about when asking whether open individualism is true, or when discussing open vs. empty individualism.
I agree completely. I identify equally as an open and empty individualist. As I’ve written elsewhere (in You Are Them): “I think these ‘positions’ are really just two different ways of expressing the same truth. They merely define the label of ‘same person’ in different ways.”
Also, I think it’s perfectly coherent to have egoistic goals even under a reductionist view of personal identity.
I guess it depends on what those egoistic goals are. The fact that some egoistic goals are highly instrumentally useful for the benefit of others (even when one does not intend to benefit others, cf. Smith’s invisible hand, the deep wisdom of Ayn Rand, and, more generally, the fact that many of our selfish desires probably shouldn’t be expected to be that detrimental to others, or at least to our in-group, given that we evolved as social creatures) is, I think, a confounding factor: it makes it seem plausible to say that pursuing them is coherent and non-problematic in light of a reductionist view of personal identity. Yet if it were transparent that the pursuit of these egoistic goals comes at the cost of many other beings’ intense suffering, I think we would be reluctant to say that pursuing them is “perfectly coherent”, especially in light of such a view of personal identity (though many would probably say so regardless; one can, for example, also argue that it is incoherent by appeal to inconsistency: “we should not treat the same, or sufficiently similar, entities differently”).
For instance, would we, with this view of personal identity, really claim that it is “perfectly coherent” to push button A (“you get a brand new pair of shorts”) when we could have pushed button B (“you prevent 100 years of torture, for someone else in one sense, yet for yourself in another, quite real sense, which will not be prevented if you push button A”)? It seems much more plausible to deem it perfectly coherent to have a selfish desire to start a company, or to signal coolness or otherwise gain personal satisfaction by being an effective altruist.
But if that’s all we mean by “moral realism” then it would be rather trivial.
I don’t quite understand why you would call this trivial. Perhaps it is trivial that many of us, perhaps even the vast majority, agree. Yet, as mentioned, the acceptance of a principle like “avoid causing unnecessary suffering” is extremely significant in terms of its practical implications: many have argued that it implies the adoption of veganism (where the effects on wildlife, a potential confounding factor, are often disregarded, of course), and one could even employ it to argue against space colonization (depending on what we hold to constitute necessity). So, in terms of practical consequences at least, I’m almost tempted to say that it could hardly be more significant. Nor is it clear to me that agreement on a highly detailed axiology would have substantially stronger, or even clearer, implications than what we could get off the ground from quite crude principles; it seems to me there may well be strong diminishing returns here, as the final sentence of your reply seems to weakly concede. This is also because the large range of error produced by empirical uncertainty may, on consequentialist views at least, make the practical difference between realizing a detailed axiology and realizing a crude one far less clear than the difference between the two at the purely theoretical level, perhaps even so much so that it virtually vanishes in many cases.
Maybe my criteria are a bit too strict [...]
I’m just wondering: too strict for what purpose?
This may seem a bit disconnected, but I wanted to share an analogy that came to mind: Imagine mathematics were a rather different field in which we agreed only about simple arithmetic such as 2 + 2 = 4, and everything beyond that were like the Riemann hypothesis: no consensus, with clear answers apparently beyond our grasp. Would we then say that our recognition that 2 + 2 = 4 holds true, at least in some sense (given intuitive axioms, say), is trivial with respect to asserting some form of mathematical realism? And would finding widely agreed-upon solutions to our harder problems constitute a significant step toward deciding whether we should accept such a realism? I fail to see how it would.
Empty individualism is quite different from open individualism. Empty individualism says that you exist only during the present fraction of a second. This leads to the conclusion that no matter what action you take, the amount of pain or pleasure you will experience as a consequence of it will be zero, which in turn leads to nihilism. Open individualism, on the other hand, says that you will be repeatedly reincarnated as every human who will ever live. In the words of David Pearce: “If open individualism is true, then the distinction between decision-theoretic rationality and morality (arguably) collapses. An intelligent sociopath would do the same as an intelligent saint; it’s all about me.” This means that egoistic sociopaths would turn themselves into altruists of sorts.
The only way I know of in which empty individualism can lead toward open individualism works as follows: when choosing which action to take, one should select the action that leads to the least amount of suffering for oneself. If there were a high probability that empty individualism is true and a very small but non-zero probability that open individualism is true, one would still have to take the action dictated by open individualism, because empty individualism stays neutral with regard to which action to take, thereby making itself irrelevant.
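The dominance argument above can be sketched as a simple expected-value calculation. The credences and utilities below are purely illustrative assumptions (they appear nowhere in the discussion); the point is only structural: empty individualism assigns zero value to every action for “you”, so any nonzero credence in open individualism decides the choice.

```python
# Minimal sketch of the dominance argument, with hypothetical numbers.
p_empty = 0.95  # credence in empty individualism (hypothetical)
p_open = 0.05   # credence in open individualism (hypothetical)

# Hypothetical utilities of each action for "you" under each view.
# Action "A" benefits you narrowly; action "B" prevents others' suffering,
# which open individualism counts as preventing your own.
utilities = {
    "A": {"empty": 0.0, "open": -100.0},
    "B": {"empty": 0.0, "open": 100.0},
}

def expected_value(action: str) -> float:
    u = utilities[action]
    return p_empty * u["empty"] + p_open * u["open"]

best = max(utilities, key=expected_value)
print(best)  # "B": empty individualism is neutral, so open individualism decides
```

Because the empty-individualist term is always zero, the credences can be anything (with p_open > 0) without changing which action wins.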
Note, however, that empty individualism vs. open individualism is a false dichotomy, as there are other contenders, such as closed individualism, which is the commonsensical view, at least here in the West. So since empty individualism makes itself irrelevant, for now the contention is just between open individualism and closed individualism. It would in principle be possible to calculate whether open individualism or closed individualism is more likely to be true. Furthermore, it would be possible to calculate whether an AGI would be open individualist toward humanity or not. To conduct such a calculation successfully before the singularity would, however, require collaboration between many theoreticians.