My suspicion is that you’d run into difficulties defining what it means for morality to be real/part of the territory while also having that definition be independent of “whatever causes experts to converge their opinions under ideal reasoning conditions.”
In the absence of new scientific discoveries about the territory, I’m not sure whether experts (even “ideal” ones) should converge, given that an absence of evidence tends to allow room for personal taste. For example, can we converge on the morality of abortion, or of factory farms, without understanding what, in the territory, leads to the moral value of persons and animals? I think we can agree that less factory farming, less meat consumption, and fewer abortions are better, all else being equal, but in reality we face tradeoffs: potentially less enjoyable meals (luckily there’s Beyond Meat), or children raised by poor single moms who didn’t want children.
I don’t even see how we can conclude that higher populations are better, as EAs often do, because (i) I don’t see how to detect what standard of living is better than non-existence, or how much suffering is worse than non-existence; (ii) I don’t see how to rule out the possibility that the number of beings does not scale linearly with the number of monadal experiencers; (iii) the presumed goodness of a higher population must be balanced against a higher catastrophic risk of exceeding Earth’s carrying capacity; and (iv) I don’t see how to rule out that things other than the valence of experiences are morally (terminally) important. Plus, how to value the future is puzzling to me, appealing as longtermism’s linear valuation is.
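To make “linear valuation” concrete, here is a rough sketch of what I have in mind (the notation is mine, not anything standard from the longtermist literature). The linear (undiscounted) total is

$$V_{\text{linear}} = \sum_{t=0}^{T} \sum_{i \in P_t} w_i$$

versus a time-discounted total

$$V_{\text{discounted}} = \sum_{t=0}^{T} \delta^{t} \sum_{i \in P_t} w_i, \qquad 0 < \delta < 1,$$

where $P_t$ is the population alive at time $t$ and $w_i$ is the net welfare (valence) of individual $i$. The linear version weights a life a million years from now exactly as much as a life today; that impartiality is what I find appealing, and the puzzle is whether it, rather than some discounted alternative, is actually correct.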
So while I’m a moral realist, (i) I don’t presume to know what the moral reality actually is, (ii) my moral judgements tend to be provisional, and (iii) I don’t expect to agree on everything with a hypothetical clone of myself who starts from the same two axioms as me (though I expect we’d get along well and agree on many key points). But what everybody in my school of thought should agree on is that scientific approaches to the Hard Problem of Consciousness are important, because we can probably act morally better after it is solved. I think even some approaches that are generally considered morally unacceptable by society today are worth consideration, e.g. destructive experiments on the brains of terminally ill patients who (of course) gave their consent for these experiments. (It doesn’t make sense to do such experiments today, though: before any experiments take place, plausible hypotheses must be developed that the experiments could falsify, and presumably any useful nondestructive experiments should be done first.)
[Addendum:]
I’m always a bit split on whether people who place a lot of weight on qualia in their justification for moral realism are non-naturalists or naturalists.
Why? At the link you said, “I’d think she’s saying that pleasure has a property that we recognize as “what we should value” in a way that somehow is still a naturalist concept. I don’t understand that bit.” But by the same token, if I assume that Hewitt talking about “pleasure” is essentially the same thing as me talking about “valence”, then I don’t understand why you seem to think it’s “illegitimate” to suppose valence exists in the territory, or what you think is there instead.
So while I’m a moral realist, (i) I don’t presume to know what the moral reality actually is
If you don’t think you know what the moral reality is, why are you confident that there is one?
I discuss possible answers to this question here and explain why I find all of them unsatisfying.
The only realism-compatible position I find somewhat defensible is something like “It may turn out that morality isn’t a crisp concept in thingspace that gives us answers to all the contested questions (population ethics, comparing human lives to other sentient beings, preferences vs hedonism, etc), but we don’t know yet. It may also turn out that as we learn more about the various options and as more facts about human minds and motivation and so on come to light, there will be a theory that ‘stands out’ as the obvious way of going about altruism/making the world better. Therefore, I’m not yet willing to call myself a confident moral anti-realist.”
That said, I give some arguments in my sequence why we shouldn’t expect any theory to ‘stand out’ like that. I believe these questions will remain difficult forever and competent reasoners will often disagree on their respective favorite answers.
Why? At the link you said, “I’d think she’s saying that pleasure has a property that we recognize as “what we should value” in a way that somehow is still a naturalist concept. I don’t understand that bit.” But by the same token, if I assume that Hewitt talking about “pleasure” is essentially the same thing as me talking about “valence”, then I don’t understand why you seem to think it’s “illegitimate” to suppose valence exists in the territory, or what you think is there instead.
This goes back to the same disagreement we’re discussing, the one about expert consensus or lack thereof. The naturalist version of “value is a part of the territory” would be that when we introspect about our motivation and the nature of pleasure and so on, we’ll agree that pleasure is what’s valuable. However, empirically, many people don’t conclude this; they aren’t hedonists. (As I defend in the post, I think they aren’t thereby making any sort of mistake. For instance, it’s simply false that non-hedonist philosophers would categorically be worse at constructing thought experiments to isolate confounding variables for assessing whether we value things other than pleasure only instrumentally. I could totally pass the Ideological Turing test for why some people are hedonists. I just don’t find the view compelling myself.)
At this point, hedonists could either concede that there’s no sense in which hedonism is true for everyone – because not everyone agrees.
Or they can say something like “Well, it may not seem to you that you’re making a mistake of reasoning, but pleasure has this property that it is GOOD in a normative sense irreducible to any of your other dispositions, and you’re missing that, so you ARE making a mistake about normativity, even if you say you don’t care.”
And then we’re back to “How do they know this?” and “What’s the point of ‘normativity’ if it’s disconnected from what I (on reflection) want/what motivates me?” Etc. It’s the same disagreement again. The reason I believe Hewitt and others want to have their cake and eat it too is that they want to simultaneously (1) downplay the relevance of empirical information about whether sophisticated reasoners find hedonism compelling, while (2) still claiming that hedonism is correct in some direct, empirical sense, which makes it “part of the territory.” The tension here is that claiming that “hedonism is correct in some direct, empirical sense” would predict expert convergence.
If you don’t think you know what the moral reality is, why are you confident that there is one?
I am confident that if there is no territory relevant to morality, then illusionism is true and (paradoxically) it doesn’t matter what our maps contain because the brains that contain the maps do not correlate with any experiences in base reality. I therefore ignore illusionism and proceed with the assumption that there is something real, that it is linked to brains and correlates positively with mental experience, that it is scientifically discoverable, and that prior to such a discovery we can derive reasonable models of morality grounded in our current body of scientific/empirical information.
The naturalist version of “value is a part of the territory” would be that when we introspect about our motivation and the nature of pleasure and so on, we’ll agree that pleasure is what’s valuable.
I don’t see why “introspecting on our motivation and the nature of pleasure and so on” should be what “naturalism” means, or why a moral value discovered that way necessarily corresponds with the territory. I expect morally-relevant territory to have similarities to other things in physics: to be somehow simple, to have existed long before humans did, and to somehow interact with humans. By the way, I prefer to say “positive valence” over “pleasure” because laymen would misunderstand the latter.
At this point, hedonists could either concede that there’s no sense in which hedonism is true for everyone – because not everyone agrees.
I don’t concede because people having incorrect maps is expected and tells me little about the territory.
Or they can say something like “Well, it may not seem to you that you’re making a mistake of reasoning, but pleasure has this property that it is GOOD in a normative sense irreducible to any of your other dispositions
I’m not sure what these other dispositions are, but I’m thinking on a level below normativity. I say positive valence is good because, at the level of fundamental physics, it is the best candidate I am aware of for what could be (terminally) good. If you propose that “knowledge is terminally good”, for example, I wouldn’t dismiss it entirely, but I don’t see how human-level knowledge would have a physics-level meaning. It does seem like something related to knowledge, namely comprehension, is part of consciousness, so maybe comprehension is terminally good, but if I could only pick one, it seems to me that valence is a better candidate because “obviously” pleasure+bafflement > torture+comprehension. (FWIW, I suspect that the human sense of comprehension differs from genuine comprehension, and both might even differ from physics-level comprehension, if that exists. If a philosopher terminally values the second, I’d call that valuation nonrealist.)
claiming that “hedonism is correct in some direct, empirical sense” would predict expert convergence.
🤷‍♂️ Why? When you say “expert”, do you mean “moral realist”? But then, which kind of moral realist? Obviously I’m not in the Foot or Railton camp; in my camp, moral uncertainty follows readily from my axioms, since they tell me there is something morally real, but not what it is.
Edit: It would certainly be interesting if other people started from axioms similar to mine but diverged in their moral opinions. Please let me know if you know of philosopher(s) who start from similar axioms.
I don’t concede because people having incorrect maps is expected and tells me little about the territory.
I’m clearly talking about expert convergence under ideal reasoning conditions, as discussed earlier. Weird that this wasn’t apparent. In physics or any other scientific domain, there’s no question that experts would eventually converge if they had ideal reasoning conditions. That’s what makes these domains scientifically valid (i.e., they study “real things”). Why is morality different? (No need to reply; it feels like we’re talking in circles.)
FWIW, I think it’s probably consistent to have a position that includes (1) a wager for moral realism (“if it’s not true, then nothing matters”; your wager is about the importance of qualia, but I’ve also seen similar reasoning around normativity as the bedrock, or free will), and (2) a simplicity/“lack of plausible alternatives” argument for hedonism. This sort of argument for hedonism only works if you take realism for granted, but that’s where the wager comes in handy. (Still, one could argue that tranquilism is ‘simpler’ than hedonism and therefore more likely to be the one true morality, but okay.) Note that this combination of views isn’t quite “being confident in moral realism,” though. It’s only “confidence in acting as though moral realism is true.”
I talk about wagering on moral realism in this dialogue and the preceding post. In short, it seems fanatical to me if taken to its conclusions, and I don’t believe that many people really believe this stuff deep down without any doubt whatsoever. Like, if push comes to shove, do you really have more confidence in your understanding of illusionism vs. other views in philosophy of mind, or do you have more confidence in wanting to reduce the thing that Brian Tomasik calls suffering, when you see it in front of you (regardless of whether illusionism turns out to be true)? (Of course, far be it from me to discourage people from taking weird ideas seriously; I’m an EA, after all. I’m just saying that it’s worth reflecting on whether you really buy into that wager wholeheartedly or whether you have some meta-uncertainty.)
I also talk a bit about consciousness realism in endnote 18 of my post “Why Realists and Anti-Realists Disagree.” I want to flag that I personally don’t understand why consciousness realism would necessarily imply moral realism. I guess I can see that it gets you closer to it, but I think there’s more to argue for even with consciousness realism.

In any case, I think illusionism is being strawmanned in that debate. Illusionists aren’t denying anything worth wanting. Illusionists are only denying something that never made sense in the first place. It’s the same as compatibilists in the free will debate: you never wanted “true free will,” whatever that is. Just like one can be mistaken about one’s visual field having lots of details even at the edges, or how some people with a brain condition can be mistaken about seeing stuff when they have blindsight, illusionists claim that people can be mistaken about some of the properties they ascribe to consciousness. They’re not mistaken about a non-technical interpretation of “it feels like something to be me,” because that’s just how we describe the fact that there’s something that both illusionists and qualia realists are debating. However, illusionists claim that qualia realists are mistaken about a philosophically loaded interpretation of “it feels like something to be me,” where the hidden assumption is something like “feeling like something is a property that is either on or off for something, and there’s always a fact of the matter.” See the dialogue in endnote 18 of that post on why this isn’t correct (or at least why we cannot infer it from our experience of consciousness).

(This debate is, by the way, very similar to the moral realism vs. anti-realism debate. There’s a sense in which anti-realists aren’t denying that “torture is wrong” in a loose and not-too-philosophically-loaded sense. They’re just denying that, based on “torture is wrong,” we can infer that there’s a fact of the matter about all courses of action, i.e., whether they’re right or wrong.)

Basically, the point I’m trying to make here is that illusionists aren’t disagreeing with you if you say you’re conscious. They’re only disagreeing with you when, based on introspecting about your consciousness, you claim to know that an omniscient being could tell about every animal/thing/system/process whether it’s conscious or not, i.e., that there must be a fact of the matter. But just because it feels to you like there’s a fact of the matter doesn’t mean there aren’t myriad edge cases where we (or experts under ideal reasoning conditions) can’t draw crisp boundaries around what may or may not be ‘conscious.’ That’s why illusionists like Brian Tomasik end up saying that consciousness is about what kind of algorithms you care about.