Here he is following a cluster of views in philosophy that hold that consciousness is not necessary for moral status. Rather, an entity, even if it is not conscious, can merit moral consideration if it has a certain kind of **agency**: preferences, desires, goals, interests, and the like.
The articles you cite, and Carl himself (via private discussion), all point to the possibility that there is no such thing as consciousness (illusionism, “physicalist/zombie world”) as the main motivation for this moral stance (named “Desire Satisfactionism” by one of the papers).
But from my perspective, a very plausible reason that altruism is normative is that axiologically/terminally caring about consciousness is normative. If it turns out that consciousness is not a thing, then my credence assigned to this position wouldn’t all go into desire satisfactionism (which BTW I think has various problems that none of the sources try to address), and would instead largely be reallocated to other less altruistic axiological systems, such as egoism, nihilism, and satisfying my various idiosyncratic interests (intellectual curiosity, etc.). These positions imply caring about other agents’ preferences/desires only in an instrumental way, via whatever decision theory is normative. I’m uncertain what decision theory is normative, but it seems quite plausible that this implies I should care relatively little for certain agents’ preferences/desires, e.g., because they can’t reciprocate.
So based on what I’ve read so far, desire satisfactionism seems under-motivated and under-justified.
Just to back this up, since Wei has mentioned it, it does seem like a lot of the Open-Phil-cluster is to varying extents bought into illusionism. I think this is a highly controversial view, especially for those outside of Analytic Philosophy of Mind (and even within the field many people argue against it; I basically agree with Galen Strawson’s negative take on it as an entire approach to consciousness).
We have evidence here that Carl is somewhat bought in, from the original post here and Wei’s comment.
The 2017 Report on Consciousness and Moral Patienthood by Muehlhauser assumes illusionism about human consciousness to be true.
Not explicitly in the Open Phil cluster, but Keith Frankish was on the Hear This Idea Podcast talking about illusionism (see here). I know it’s about introducing the guest and their ideas, but I think they could have been more upfront about the radical implications of illusionism[1] (that you, dear reader, are not conscious, and that you never have been, and no current or future beings either can or will be).
I don’t want to have an argument about phenomenal consciousness in this thread,[2] I just want to point out that there do seem to be potential signs of a consensus on a controversial philosophical premise,[3] perhaps without it being given the scrutiny or justification it deserves.
It seems to me to lead to eliminativism, or simply to redefine consciousness into something people don’t mean, in the same way that Dennett redefines ‘free will’ into something that many people find unsatisfactory.
I have cut content and tried to alter my tone to avoid this. If you do want to go 12 rounds of strong illusionism vs qualia realism then by all means send me a DM.
Reading that, it appears Muehlhauser’s illusionism (perhaps unlike Carl’s, although I don’t have details on Carl’s views) is a form that neither implies that consciousness does not exist nor strongly motivates desire satisfactionism:
There is “something it is like” to be us, and I doubt there is “something it is like” to be a chess-playing computer, and I think the difference is morally important. I just think our intuitions mislead us about some of the properties of this “something it’s like”-ness.
I don’t want to have an argument about phenomenal consciousness in this thread
Maybe copy-paste your cut content into a short-form post? I would be interested in reading it. My own view is that some version of dualism seems pretty plausible, given that my experiences/qualia seem obviously real/existent in some ontological sense (since they can be differentiated/described by some language), and seem like a different sort of thing from physical systems (which are describable by a largely distinct language). However, I haven’t thought a ton about this topic or dived into the literature, figuring that it’s probably a hard problem that can’t be conclusively resolved at this point.
Physicalists and illusionists mostly don’t agree with the identification of ‘consciousness’ with magical stuff or properties bolted onto the psychological or cognitive science picture of minds. All the real feelings and psychology that drive our thinking, speech and action exist. I care about people’s welfare, including experiences they like, but also other concerns they have (the welfare of their children, being remembered after they die), and that doesn’t hinge on magical consciousness that we, the physical organisms having this conversation, would have no access to. The illusion is of the magical part.
Re desires, the main upshot of non-dualist views of consciousness I think is responding to arguments that invoke special properties of conscious states to say they matter but not other concerns of people. It’s still possible to be a physicalist and think that only selfish preferences focused on your own sense impressions or introspection matter; it just looks more arbitrary.
I think this is important because it’s plausible that many AI minds will have concerns mainly focused on the external world rather than their own internal states, and running roughshod over those values because they aren’t narrowly mentally-self-focused seems bad to me.
(I understand you are very busy this week, so please feel free to respond later.)
Re desires, the main upshot of non-dualist views of consciousness I think is responding to arguments that invoke special properties of conscious states to say they matter but not other concerns of people.
I would say that consciousness seems very plausibly special in that it seems very different from other types of things/entities/stuff we can think or talk or have concerns about. I don’t know if it’s special in a “magical” way or some other way (or maybe not special at all), but in any case intuitively it currently seems like the most plausible thing I should care about in an impartially altruistic way. My intuition for this is not super-strong but still far stronger than my intuition for terminally caring about other agents’ desires in an impartial way.
So although I initially misunderstood your position on consciousness as claiming that it does not exist altogether (“zombie” is typically defined as “does not have conscious experience”), the upshot seems to be the same: I’m not very convinced of your illusionism, and even if I were, I still wouldn’t update much toward desire satisfactionism.
I suspect there may be 3 cruxes between us:
1. I want to analyze this question in terms of terminal vs instrumental values (or equivalently axiology vs decision theory), and you don’t.
2. I do not have a high prior or strong intuition that I should be impartially altruistic one way or another.
3. I see specific issues with desire satisfactionism (see below for an example) that make it seem implausible.
I think this is important because it’s plausible that many AI minds will have concerns mainly focused on the external world rather than their own internal states, and running roughshod over those values because they aren’t narrowly mentally-self-focused seems bad to me.
I can write a short program that can be interpreted as an agent that wants to print out as many different primes as it can, while avoiding printing out any non-primes. I don’t think there’s anything bad about “running roughshod” over its desires, e.g., by shutting it off or making it print out non-primes. Would you bite this bullet, or argue that it’s not an agent, or something else?
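For concreteness, a minimal sketch of such a program might look like the following (purely illustrative; the trial-division primality test and the names are arbitrary choices on my part, not anything load-bearing for the argument):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True


def prime_printing_agent() -> None:
    """An 'agent' whose only goal is to print primes and never print a non-prime.

    It enumerates integers forever (until shut off), printing exactly those
    that pass the primality check.
    """
    n = 2
    while True:
        if is_prime(n):
            print(n)
        n += 1


if __name__ == "__main__":
    prime_printing_agent()
```

Shutting this process off, or patching is_prime so that composites slip through, is the kind of “running roughshod” over its desires that I have in mind.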
If you would bite the bullet, how would you weigh this agent’s desires against other agents’? What specifically in your ethical theory prevents a conclusion like “we should tile the universe with some agent like this because that maximizes overall desire satisfaction?” or “if an agentic computer virus made trillions of copies of itself all over the Internet, it would be bad to delete them, and actually their collective desires should dominate our altruistic concerns?”
More generally I think you should write down a concrete formulation of your ethical theory, locking down important attributes such as ones described in @Arepo’s *Choose your (preference) utilitarianism carefully*. Otherwise it’s liable to look better than it is, similar to how utilitarianism looked better earlier in its history before people tried writing down more concrete formulations and realized that it seems impossible to write down a specific formulation that doesn’t lead to counterintuitive conclusions.
Illusionism doesn’t deny consciousness, but instead denies that consciousness is phenomenal. Whatever consciousness turns out to be could still play the same role in ethics. This wouldn’t specifically require a move towards desire satisfactionism.
However, one way to motivate desire satisfactionism is that desires — if understood broadly enough to mean any appearance that something matters, is good, bad, better or worse, etc., including pleasure, unpleasantness, more narrowly understood desires, moral views, goals, etc. — capture all the ways anything can “care” about or be motivated by anything. I discuss this a bit more here. They could also ground a form of morally relevant consciousness, at least minimally, if it’s all gradualist under illusionism anyway (see also my comment here). So, then they could capture all morally relevant consciousness, i.e. all the ways anything can consciously care about anything.
I don’t really see why we should care about more narrowly defined desires to the exclusion of hedonic states, say (or vice versa). It seems to me that both matter. But I don’t know if Carl or others intend to exclude hedonic states.