I don’t concede, because people having incorrect maps is expected and tells me little about the territory.
I’m clearly talking about expert convergence under ideal reasoning conditions, as discussed earlier. Weird that this wasn’t apparent. In physics or any other scientific domain, there’s no question that experts would eventually converge if they had ideal reasoning conditions. That’s what makes these domains scientifically valid (i.e., they study “real things”). Why is morality different? (No need to reply; it feels like we’re talking in circles.)
FWIW, I think it’s probably consistent to have a position that includes (1) a wager for moral realism (“if it’s not true, then nothing matters” – your wager is about the importance of qualia, but I’ve also seen similar reasoning around normativity as the bedrock, or free will) and (2) a simplicity/“lack of plausible alternatives” argument for hedonism. This sort of argument for hedonism only works if you take realism for granted, but that’s where the wager comes in handy. (Still, one could argue that tranquilism is ‘simpler’ than hedonism and therefore more likely to be the one true morality, but okay.) Note that this combination of views isn’t quite “being confident in moral realism,” though. It’s only “confidence in acting as though moral realism is true.”
I talk about wagering on moral realism in this dialogue and the preceding post. In short, it seems fanatical to me if taken to its conclusions, and I don’t believe that many people really believe this stuff deep down without any doubt whatsoever. Like, if push comes to shove, do you really have more confidence in your understanding of illusionism vs. other views in philosophy of mind, or do you have more confidence in wanting to reduce the thing that Brian Tomasik calls suffering when you see it in front of you (regardless of whether illusionism turns out to be true)? (Of course, far be it from me to discourage people from taking weird ideas seriously; I’m an EA, after all. I’m just saying that it’s worth reflecting on whether you really buy into that wager wholeheartedly or whether you retain some meta-uncertainty.)
I also talk a bit about consciousness realism in endnote 18 of my post “Why Realists and Anti-Realists Disagree.” I want to flag that I personally don’t understand why consciousness realism would necessarily imply moral realism. I can see that it gets you closer, but I think there’s more left to argue for even granting consciousness realism.

In any case, I think illusionism is being strawmanned in that debate. Illusionists aren’t denying anything worth wanting; they’re only denying something that never made sense in the first place. It’s the same move compatibilists make in the free will debate: you never wanted “true free will,” whatever that is. Just as one can be mistaken about one’s visual field having lots of detail even at the edges, or as people with blindsight can be mistaken in their introspective reports about what they see, illusionists claim that people can be mistaken about some of the properties they ascribe to consciousness. They’re not mistaken about a non-technical interpretation of “it feels like something to be me,” because that’s just how we describe the thing that both illusionists and qualia realists are debating. However, illusionists claim that qualia realists are mistaken about a philosophically loaded interpretation of “it feels like something to be me,” where the hidden assumption is something like “feeling like something is a property that is either on or off for any given system, and there’s always a fact of the matter.” See the dialogue in endnote 18 of that post on why this isn’t correct (or at least why we cannot infer it from our experience of consciousness).

(This debate is, by the way, very similar to the moral realism vs. anti-realism debate. There’s a sense in which anti-realists aren’t denying that “torture is wrong” in a loose, not-too-philosophically-loaded sense. They’re just denying that, from “torture is wrong,” we can infer that there’s a fact of the matter about all courses of action, i.e., whether they’re right or wrong.)

Basically, the point I’m trying to make is that illusionists aren’t disagreeing with you if you say you’re conscious. They’re only disagreeing with you when, based on introspecting about your consciousness, you claim to know that an omniscient being could tell, for every animal/thing/system/process, whether it’s conscious or not; that there must be a fact of the matter. But just because it feels to you like there’s a fact of the matter doesn’t mean there aren’t myriad edge cases where we (or experts under ideal reasoning conditions) can’t draw crisp boundaries around what is or isn’t ‘conscious.’ That’s why illusionists like Brian Tomasik end up saying that consciousness is about what kinds of algorithms you care about.