If you don’t think you know what the moral reality is, why are you confident that there is one?
I am confident that if there is no territory relevant to morality, then illusionism is true and (paradoxically) it doesn’t matter what our maps contain because the brains that contain the maps do not correlate with any experiences in base reality. I therefore ignore illusionism and proceed with the assumption that there is something real, that it is linked to brains and correlates positively with mental experience, that it is scientifically discoverable, and that prior to such a discovery we can derive reasonable models of morality grounded in our current body of scientific/empirical information.
The naturalist version of “value is a part of the territory” would be that when we introspect about our motivation and the nature of pleasure and so on, we’ll agree that pleasure is what’s valuable.
I don’t see why “introspecting on our motivation and the nature of pleasure and so on” should be what “naturalism” means, or why a moral value discovered that way necessarily corresponds with the territory. I expect morally-relevant territory to have similarities to other things in physics: to be somehow simple, to have existed long before humans did, and to somehow interact with humans. By the way, I prefer to say “positive valence” over “pleasure” because laymen would misunderstand the latter.
At this point, hedonists could either concede that there’s no sense in which hedonism is true for everyone – because not everyone agrees.
I don’t concede because people having incorrect maps is expected and tells me little about the territory.
Or they can say something like “Well, it may not seem to you that you’re making a mistake of reasoning, but pleasure has this property that it is GOOD in a normative sense irreducible to any of your other dispositions.”
I’m not sure what these other dispositions are, but I’m thinking on a level below normativity. I say positive valence is good because, at the level of fundamental physics, it is the best candidate I am aware of for what could be (terminally) good. If you propose that “knowledge is terminally good”, for example, I wouldn’t dismiss it entirely, but I don’t see how human-level knowledge would have a physics-level meaning. It does seem like something related to knowledge, namely comprehension, is part of consciousness, so maybe comprehension is terminally good; but if I could only pick one, valence seems the better candidate because “obviously” pleasure+bafflement > torture+comprehension. (FWIW, I suspect that the human sense of comprehension differs from genuine comprehension, and both might even differ from physics-level comprehension, if that exists. If a philosopher terminally values the second, I’d call that valuation nonrealist.)
claiming that “hedonism is correct in some direct, empirical sense” would predict expert convergence.
🤷‍♂️ Why? When you say “expert”, do you mean “moral realist”? But then, which kind of moral realist? Obviously I’m not in the Foot or Railton camp; in my camp, moral uncertainty follows readily from my axioms, since they tell me there is something morally real, but not what it is.
Edit: It would certainly be interesting if other people start from similar axioms to mine but diverge in their moral opinions. Please let me know if you know of philosopher(s) who start from similar axioms.
I don’t concede because people having incorrect maps is expected and tells me little about the territory.
I’m clearly talking about expert convergence under ideal reasoning conditions, as discussed earlier. Weird that this wasn’t apparent. In physics or any other scientific domain, there’s no question that experts would eventually converge if they had ideal reasoning conditions. That’s what makes these domains scientifically valid (i.e., they study “real things”). Why is morality different? (No need to reply; it feels like we’re talking in circles.)
FWIW, I think it’s probably consistent to have a position that includes (1) a wager for moral realism (“if it’s not true, then nothing matters” – your wager is about the importance of qualia, but I’ve also seen similar reasoning around normativity as the bedrock, or free will), and (2) a simplicity/“lack of plausible alternatives” argument for hedonism. This sort of argument for hedonism only works if you take realism for granted, but that’s where the wager comes in handy. (Still, one could argue that tranquilism is ‘simpler’ than hedonism and therefore more likely to be the one true morality, but okay.) Note that this combination of views isn’t quite “being confident in moral realism,” though. It’s only “confidence in acting as though moral realism is true.”
I talk about wagering on moral realism in this dialogue and the preceding post. In short, it seems fanatical to me if taken to its conclusions, and I don’t believe that many people really believe this stuff deep down without any doubt whatsoever. Like, if push comes to shove, do you really have more confidence in your understanding of illusionism vs. other views in philosophy of mind, or do you have more confidence in wanting to reduce the thing that Brian Tomasik calls suffering, when you see it in front of you (regardless of whether illusionism turns out to be true)? (Of course, far be it from me to discourage people from taking weird ideas seriously; I’m an EA, after all. I’m just saying that it’s worth reflecting on whether you really buy into that wager wholeheartedly, or whether you have some meta-uncertainty.)
I also talk a bit about consciousness realism in endnote 18 of my post “Why Realists and Anti-Realists Disagree.” I want to flag that I personally don’t understand why consciousness realism would necessarily imply moral realism. I guess I can see that it gets you closer to it, but I think there’s more to argue for even with consciousness realism. In any case, I think illusionism is being strawmanned in that debate. Illusionists aren’t denying anything worth wanting; they’re only denying something that never made sense in the first place. It’s the same as compatibilists in the free will debate: you never wanted “true free will,” whatever that is. Just like one can be mistaken about one’s visual field having lots of details even at the edges, or how people with blindsight can be mistaken about whether they see stuff, illusionists claim that people can be mistaken about some of the properties they ascribe to consciousness. They’re not mistaken about a non-technical interpretation of “it feels like something to be me,” because that’s just how we describe the fact that there’s something that both illusionists and qualia realists are debating. However, illusionists claim that qualia realists are mistaken about a philosophically loaded interpretation of “it feels like something to be me,” where the hidden assumption is something like “feeling like something is a property that is either on or off for something, and there’s always a fact of the matter.” See the dialogue in endnote 18 of that post on why this isn’t correct (or at least why we cannot infer this from our experience of consciousness). (This debate is, btw, very similar to the moral realism vs. anti-realism debate. There’s a sense in which anti-realists aren’t denying that “torture is wrong” in a loose and not-too-philosophically-loaded sense.
They’re just denying that, based on “torture is wrong,” we can infer that there’s a fact of the matter about all courses of action – whether they’re right or wrong.) Basically, the point I’m trying to make here is that illusionists aren’t disagreeing with you if you say you’re conscious. They’re only disagreeing with you when, based on introspecting about your consciousness, you claim to know that an omniscient being could tell, for every animal/thing/system/process, whether it’s conscious or not, i.e., that there must be a fact of the matter. But just because it feels to you like there’s a fact of the matter doesn’t mean there aren’t myriad edge cases where we (or experts under ideal reasoning conditions) can’t draw crisp boundaries around what may or may not be ‘conscious.’ That’s why illusionists like Brian Tomasik end up saying that consciousness is about what kind of algorithms you care about.