Moral realism is based on the word “real”, yet I don’t see anything I would describe as “real” (in the territory-vs-map sense) in Philippa Foot or Peter Railton’s forms of “realism”. [...]
At the same time, you mention in a reply that “anti-realism says there is no such thing” as “one true morality” which is consistent with my intuition of what anti-realism seems like it should mean ― that morality is fundamentally grounded in personal taste. But then, Foot and Railton’s accounts also seem grounded in their personal tastes.
Yeah, that’s why I also point out that I don’t consider Foot’s or Railton’s account worthy of the name “moral realism.” Even though they’ve been introduced and discussed that way.
So I would like to ask how you would classify my own account of “moral realism worthy of the name,” which I take to be something that must ultimately be grounded in the territory rather than the map.
I think it’s surprisingly difficult to spell out what it would mean for morality to be grounded in the territory. My “One Compelling Axiology” version of moral realism constitutes my best effort at operationalizing what it would mean. Because if morality is grounded in the territory, that should cause ideal reasoners to agree on the exact nature and shape of morality.
At this point in the argument, philosophers of a particular school tend to object and say something like the following:
“It’s not about what human reasoners think or whether there’s convergence of their moral views as they become more sophisticated and better studied. Instead, it’s about what’s actually true! It could be that there’s a true morality, but all human reasoners (even the best ones) are wrong about it.”
But that sort of argument begs the question. What does it mean for something to be true if we could all be wrong about it even under ideal reasoning conditions? That’s the part I don’t understand. So, when I steelman moral realism, I assume that we’re actually in a position to find out the moral truth. (At least that this is possible in theory, under the best imaginable circumstances.)

There’s an endnote in a later post in my series that’s quite relevant to this discussion. The post is Moral uncertainty and moral realism are in tension, and I’ll quote the endnote here:
Someone could object that convergence arguments [convergence arguments are a type of argument in favor of moral realism; they say that moral realism is true if sophisticated reasoners tend to converge in their moral views as they approach ideal reasoning conditions] are never strong enough to establish moral realism with high confidence. (1) What counts as “philosophically sophisticated reasoners” or “idealized reasoning conditions” is under-defined; arguably, subtle differences in these stipulations could influence whether convergence arguments work out. (2) Even conditional on expert convergence, we couldn’t be sure whether it reflects the existence of a speaker-independent moral reality. Instead, it could mean that our philosophically sophisticated reasoners happen to have the same subjective values. (3) What reasoners consider self-evident may change over time. Wouldn’t sophisticated reasoners born in (e.g.) the 17th century disagree with what we consider self-evident today?

Those are forceful objections. If we applied only the most stringent criteria for what counts as “moral realism,” we’d arguably be left with moral non-naturalism (“irreducible normativity”). After all, the only reason some philosophers consider non-naturalism (with its strange metaphysical postulates) palatable is that they find moral naturalism too watered down as an alternative. Still, I would consider convergence among a pre-selected set of expert reasoners both relevant and surprising. Therefore, I’m inclined to consider naturalist moral realism an intelligible hypothesis. I think it’s false, but I could imagine situations where I’d change my mind.

Here are some quick answers to the objections above: (1) We can imagine circumstances where the convergence isn’t sensitive to the specifics; naturalist moral realism is meant to apply at least under those circumstances. (2) Without the concept of “irreducible normativity,” any answers in philosophy will be subjective in some sense of the word (they have to somehow appeal to our reasoning styles). Still, convergence arguments would establish that there are for-us relevant insights at the end of moral reflection, and that the destination is the same for everyone! (3) When I talk about “morality,” I already have in mind some implicit connotations that the concept has to fulfill. Specifically, I consider it an essential ingredient of morality to take an “impartial stance” of some sort. To the degree that past reasoners didn’t do this, I’d argue that they were answering a different question. (When I investigate whether moral realism is true, I’m not interested in whether everyone who ever used the word “morality” was talking about the exact same thing!)

Among past philosophers who saw morality as impartial altruism, we actually find a surprising degree of moral foresight. Jeremy Bentham’s Wikipedia article reads as follows: “He advocated individual and economic freedoms, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and (in an unpublished essay) the decriminalising of homosexual acts. He called for the abolition of slavery, capital punishment and physical punishment, including that of children.
He has also become known as an early advocate of animal rights.” To get a sense for the clarity and moral thrust of Bentham’s reasoning, see also this now-famous quote: “The day may come when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognised that the number of the legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps the faculty of discourse? But a fullgrown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose they were otherwise, what would it avail? The question is not, Can they reason? nor Can they talk? but, Can they suffer?”
In the above endnote, I try to defend why I think my description of the One Compelling Axiology version of moral realism is a good steelman, despite some moral realists not liking it because I don’t allow for the possibility that moral reality is forever unknowable to even the best human reasoners under ideal reasoning conditions.
This part reads to me as if you’d been asked “what would change your mind” and you responded “realistically, nothing.” But then, my background involves banging my head against the wall with climate dismissives, so I have a visceral understanding that “science advances one funeral at a time” as Max Planck said. So my next thought, more charitably, is “well, maybe Lukas will make his judgement from the perspective of an imagined future where all necessary funerals have already taken place.”
Definitely! I’m assuming “ideal reasoning conditions” – a super high bar, totally unrealistic in reality. For the sort of thing I’m envisioning, see my post, The Moral Uncertainty Rabbit Hole, Fully Excavated. Here’s a quote from the section on “reflection procedures”:
Here’s one example of a reflection environment:
My favorite thinking environment: Imagine a comfortable environment tailored for creative intellectual pursuits (e.g., a Google campus or a cozy mansion on a scenic lake in the forest). At your disposal, you find a well-intentioned, superintelligent AI advisor fluent in various schools of philosophy and programmed to advise in a value-neutral fashion. (Insofar as that’s possible – since one cannot do philosophy without a specific methodology, the advisor must already endorse certain metaphilosophical commitments.) Besides answering questions, they can help set up experiments in virtual reality, such as ones with emulations of your brain or with modeled copies of your younger self. For instance, you can design experiments for learning what you’d value if you first encountered the EA community in San Francisco rather than in Oxford or started reading Derek Parfit or Peter Singer after the blog Lesswrong, instead of the other way around.[2] You can simulate conversations with select people (e.g., famous historical figures or contemporary philosophers). You can study how other people’s reflection concludes and how their moral views depend on their life circumstances. In the virtual-reality environment, you can augment your copy’s cognition or alter its perceptions to have it experience new types of emotions. You can test yourself for biases by simulating life as someone born with another gender(-orientation), ethnicity, or into a family with a different socioeconomic status. At the end of an experiment, your (near-)copies can produce write-ups of their insights, giving you inputs for your final moral deliberations. You can hand over authority about choosing your values to one of the simulated (near-)copies (if you trust the experimental setup and consider it too difficult to convey particular insights or experiences via text). Eventually, the person with the designated authority has to provide to your AI assistant a precise specification of values (the format – e.g., whether it’s a utility function or something else – is up to you to decide on). Those values then serve as your idealized values after moral reflection.
(Two other, more rigorously specified reflection procedures are indirect normativity and HCH.[3] Indirect normativity outputs a utility function whereas HCH attempts to formalize “idealized judgment,” which we could then consult for all kinds of tasks or situations.)[4]
“My favorite thinking environment” leaves you in charge as much as possible while providing flexible assistance. Any other structure is for you to specify: you decide the reflection strategy.[5] This includes what questions to ask the AI assistant, what experiments to do (if any), and when to conclude the reflection.
Part of the point of that quote is that there’s some subjectivity about how to set up “ideal reasoning conditions” – but we can still agree that, for practical purposes, something like the above constitutes better reasoning conditions than what we have available today. And if the best reasoners in EA (for example) or some other context where people start out with good epistemics all tended to converge after that sort of reflection, I’d consider that strong evidence for (naturalist) moral realism, the way I prefer to define it. (But some philosophers would reject this steelman of moral realism and say that my position is always, by definition, anti-realism, no matter what we might discover in the future about expert convergence, because they only want to reason about morality with “irreducible” concepts, i.e., non-naturalist moral realism.)
Please have a look at this summary of my views on Twitter. My question is, how does this view fit into the philosophy landscape?
Definitely seems realist. I’m always a bit split on whether people who place a lot of weight on qualia in their justification for moral realism are non-naturalists or naturalists. They often embrace moral naturalism themselves, but there’s a sense in which I think that’s illegitimately trying to have their cake and eat it too. See this comment discussion here, below my post on hedonist moral realism and qualia-inspired views.
I haven’t found the time to look through your summary of views on Twitter in much detail, but my suspicion is that you’d run into difficulties defining what it means for morality to be real/part of the territory and also have that be defined independently of “whatever causes experts to converge their opinions under ideal reasoning conditions.”
my suspicion is that you’d run into difficulties defining what it means for morality to be real/part of the territory and also have that be defined independently of “whatever causes experts to converge their opinions under ideal reasoning conditions.”
In the absence of new scientific discoveries about the territory, I’m not sure whether experts (even “ideal” ones) should converge, given that an absence of evidence tends to allow room for personal taste. For example, can we converge on the morality of abortion, or of factory farms, without understanding what, in the territory, leads to the moral value of persons and animals? I think we can agree that less factory farming, less meat consumption and fewer abortions are better all else being equal, but in reality we face tradeoffs ― potentially less enjoyable meals (luckily there’s Beyond Meat); children raised by poor single moms who didn’t want children.
I don’t even see how we can conclude that higher populations are better, as EAs often do: (i) I don’t know how we’d detect what standard of living is better than non-existence, or how much suffering is worse than non-existence; (ii) I don’t see how to rule out the possibility that the number of beings doesn’t scale linearly with the number of monadal experiencers; (iii) the presumed goodness of a higher population needs to be balanced against a higher catastrophic risk of exceeding Earth’s carrying capacity; and (iv) I don’t see how to rule out that things other than the valence of experiences are morally (terminally) important. Plus, how to value the future is puzzling to me, appealing as longtermism’s linear valuation is.
So while I’m a moral realist, (i) I don’t presume to know what the moral reality actually is, (ii) my moral judgements tend to be provisional, and (iii) I don’t expect to agree on everything with a hypothetical clone of myself who starts from the same two axioms as me (though I expect we’d get along well and agree on many key points). But what everybody in my school of thought should agree on is that scientific approaches to the Hard Problem of Consciousness are important, because we can probably act morally better after it is solved. I think even some approaches that are generally considered morally unacceptable by society today are worth considering, e.g. destructive experiments on the brains of terminally ill patients who (of course) gave their consent for these experiments. (It doesn’t make sense to do such experiments today, though: before experiments take place, plausible hypotheses must be developed that could be falsified by experiment, and presumably any useful nondestructive experiments should be done first.)
[Addendum:]
I’m always a bit split on whether people who place a lot of weight on qualia in their justification for moral realism are non-naturalists or naturalists.
Why? At the link you said “I’d think she’s saying that pleasure has a property that we recognize as “what we should value” in a way that somehow is still a naturalist concept. I don’t understand that bit.” But by the same token ― if I assume Hewitt talking about “pleasure” is essentially the same thing as me talking about “valence” ― I don’t understand why you seem to think it’s “illegitimate” to suppose valence exists in the territory, or what you think is there instead.
So while I’m a moral realist, (i) I don’t presume to know what the moral reality actually is
If you don’t think you know what the moral reality is, why are you confident that there is one?
I discuss possible answers to this question here and explain why I find all of them unsatisfying.
The only realism-compatible position I find somewhat defensible is something like “It may turn out that morality isn’t a crisp concept in thingspace that gives us answers to all the contested questions (population ethics, comparing human lives to other sentient beings, preferences vs hedonism, etc), but we don’t know yet. It may also turn out that as we learn more about the various options and as more facts about human minds and motivation and so on come to light, there will be a theory that ‘stands out’ as the obvious way of going about altruism/making the world better. Therefore, I’m not yet willing to call myself a confident moral anti-realist.”
That said, I give some arguments in my sequence why we shouldn’t expect any theory to ‘stand out’ like that. I believe these questions will remain difficult forever and competent reasoners will often disagree on their respective favorite answers.
Why? At the link you said “I’d think she’s saying that pleasure has a property that we recognize as “what we should value” in a way that somehow is still a naturalist concept. I don’t understand that bit.” But by the same token ― if I assume Hewitt talking about “pleasure” is essentially the same thing as me talking about “valence” ― I don’t understand why you seem to think it’s “illegitimate” to suppose valence exists in the territory, or what you think is there instead.
This goes back to the same disagreement we’re discussing, the one about expert consensus or lack thereof. The naturalist version of “value is a part of the territory” would be that when we introspect about our motivation and the nature of pleasure and so on, we’ll agree that pleasure is what’s valuable. However, empirically, many people don’t conclude this; they aren’t hedonists. (As I defend in the post, I think they aren’t thereby making any sort of mistake. For instance, it’s simply false that non-hedonist philosophers would categorically be worse at constructing thought experiments to isolate confounding variables for assessing whether we value things other than pleasure only instrumentally. I could totally pass the Ideological Turing test for why some people are hedonists. I just don’t find the view compelling myself.)
At this point, hedonists could either concede that there’s no sense in which hedonism is true for everyone – because not everyone agrees.
Or they can say something like “Well, it may not seem to you that you’re making a mistake of reasoning, but pleasure has this property that it is GOOD in a normative sense irreducible to any of your other dispositions, and you’re missing that, so you ARE making a mistake about normativity, even if you say you don’t care.”
And then we’re back to “How do they know this?” and “What’s the point of ‘normativity’ if it’s disconnected from what I (on reflection) want/what motivates me?” Etc. It’s the same disagreement again. The reason I believe Hewitt and others want to have their cake and eat it too is that they want to simultaneously (1) downplay the relevance of empirical information about whether sophisticated reasoners find hedonism compelling, (2) while still claiming that hedonism is correct in some direct, empirical sense, which makes it “part of the territory.” The tension here is that claiming that “hedonism is correct in some direct, empirical sense” would predict expert convergence.
If you don’t think you know what the moral reality is, why are you confident that there is one?
I am confident that if there is no territory relevant to morality, then illusionism is true and (paradoxically) it doesn’t matter what our maps contain because the brains that contain the maps do not correlate with any experiences in base reality. I therefore ignore illusionism and proceed with the assumption that there is something real, that it is linked to brains and correlates positively with mental experience, that it is scientifically discoverable, and that prior to such a discovery we can derive reasonable models of morality grounded in our current body of scientific/empirical information.
The naturalist version of “value is a part of the territory” would be that when we introspect about our motivation and the nature of pleasure and so on, we’ll agree that pleasure is what’s valuable.
I don’t see why “introspecting on our motivation and the nature of pleasure and so on” should be what “naturalism” means, or why a moral value discovered that way necessarily corresponds with the territory. I expect morally-relevant territory to have similarities to other things in physics: to be somehow simple, to have existed long before humans did, and to somehow interact with humans. By the way, I prefer to say “positive valence” over “pleasure” because laymen would misunderstand the latter.
At this point, hedonists could either concede that there’s no sense in which hedonism is true for everyone – because not everyone agrees.
I don’t concede because people having incorrect maps is expected and tells me little about the territory.
Or they can say something like “Well, it may not seem to you that you’re making a mistake of reasoning, but pleasure has this property that it is GOOD in a normative sense irreducible to any of your other dispositions
I’m not sure what these other dispositions are, but I’m thinking on a level below normativity. I say positive valence is good because, at a level of fundamental physics, it is the best candidate I am aware of for what could be (terminally) good. If you propose that “knowledge is terminally good”, for example, I wouldn’t dismiss it entirely, but I don’t see how human-level knowledge would have a physics-level meaning. It does seem like something related to knowledge, namely comprehension, is part of consciousness, so maybe comprehension is terminally good, but if I could only pick one, it seems to me that valence is a better candidate because “obviously” pleasure+bafflement > torture+comprehension. (fwiw I am thinking that the human sense of comprehension differs from genuine comprehension, and both might even differ from physics-level comprehension if it exists. If a philosopher terminally values the second, I’d call that valuation nonrealist.)
claiming that “hedonism is correct in some direct, empirical sense” would predict expert convergence.
🤷‍♂️ Why? When you say “expert”, do you mean “moral realist”? But then, which kind of moral realist? Obviously I’m not in the Foot or Railton camp ― in my camp, moral uncertainty follows readily from my axioms, since they tell me there is something morally real, but not what it is.
Edit: It would certainly be interesting if other people start from similar axioms to mine but diverge in their moral opinions. Please let me know if you know of philosopher(s) who start from similar axioms.
I don’t concede because people having incorrect maps is expected and tells me little about the territory.
I’m clearly talking about expert convergence under ideal reasoning conditions, as discussed earlier. Weird that this wasn’t apparent. In physics or any other scientific domain, there’s no question that experts would eventually converge if they had ideal reasoning conditions. That’s what makes these domains scientifically valid (i.e., they study “real things”). Why is morality different? (No need to reply; it feels like we’re talking in circles.)
FWIW, I think it’s probably consistent to have a position that includes (1) a wager for moral realism (“if it’s not true, then nothing matters” – your wager is about the importance of qualia, but I’ve also seen similar reasoning around normativity as the bedrock, or free will), and (2), a simplicity/”lack of plausible alternatives” argument for hedonism. This sort of argument for hedonism only works if you take realism for granted, but that’s where the wager comes in handy. (Still, one could argue that tranquilism is ‘simpler’ than hedonism and therefore more likely to be the one true morality, but okay.) Note that this combination of views isn’t quite “being confident in moral realism,” though. It’s only “confidence in acting as though moral realism is true.”
I talk about wagering on moral realism in this dialogue and the preceding post. In short, it seems fanatical to me if taken to its conclusions, and I don’t believe that many people really believe this stuff deep down without any doubt whatsoever. Like, if push comes to shove, do you really have more confidence in your understanding of illusionism vs other views in philosophy of mind, or do you have more confidence in wanting to reduce the thing that Brian Tomasik calls suffering, when you see it in front of you (regardless of whether illusionism turns out to be true)? (Of course, far be it from me to discourage people from taking weird ideas seriously; I’m an EA, after all. I’m just saying that it’s worth reflection if you really buy into that wager wholeheartedly, or if you have some meta uncertainty.)
I also talk a bit about consciousness realism in endnote 18 of my post “Why Realists and Anti-Realists Disagree.” I want to flag that I personally don’t understand why consciousness realism would necessarily imply moral realism. I guess I can see that it gets you closer to it, but I think there’s more to argue for even with consciousness realism.

In any case, I think illusionism is being strawmanned in that debate. Illusionists aren’t denying anything worth wanting. Illusionists are only denying something that never made sense in the first place. It’s the same as with compatibilism in the free will debate: you never wanted “true free will,” whatever that is. Just as one can be mistaken about one’s visual field having lots of details even at the edges, or as some people with a brain condition can be mistaken about seeing stuff when they have blindsight, illusionists claim that people can be mistaken about some of the properties they ascribe to consciousness. They’re not mistaken about a non-technical interpretation of “it feels like something to be me,” because that’s just how we describe the fact that there’s something that both illusionists and qualia realists are debating. However, illusionists claim that qualia realists are mistaken about a philosophically loaded interpretation of “it feels like something to be me,” where the hidden assumption is something like “feeling like something is a property that is either on or off for something, and there’s always a fact of the matter.” See the dialogue in endnote 18 of that post on why this isn’t correct (or at least why we cannot infer it from our experience of consciousness).

(This debate is, by the way, very similar to the moral realism vs. anti-realism debate. There’s a sense in which anti-realists aren’t denying that “torture is wrong” in a loose and not-too-philosophically-loaded sense. They’re just denying that, based on “torture is wrong,” we can infer that there’s a fact of the matter about all courses of action – whether they’re right or wrong.)

Basically, the point I’m trying to make here is that illusionists aren’t disagreeing with you if you say you’re conscious. They’re only disagreeing with you when, based on introspecting about your consciousness, you claim to know that an omniscient being could tell about every animal/thing/system/process whether it’s conscious or not – that there must be a fact of the matter. But just because it feels to you like there’s a fact of the matter doesn’t mean there aren’t myriads of edge cases where we (or experts under ideal reasoning conditions) can’t draw crisp boundaries around what may or may not be ‘conscious.’ That’s why illusionists like Brian Tomasik end up saying that consciousness is about what kinds of algorithms you care about.