Just to back this up, since Wei has mentioned it: it does seem like a lot of the Open-Phil-cluster is, to varying extents, bought into illusionism. I think this is a highly controversial view, especially for those outside of analytic philosophy of mind (and even within the field many people argue against it; I basically agree with Galen Strawson's negative take on it as an entire approach to consciousness).
We have evidence that Carl is somewhat bought in, from the original post here and from Wei's comment.
The 2017 Report on Consciousness and Moral Patienthood by Muehlhauser assumes illusionism about human consciousness to be true.
Not explicitly in the Open Phil cluster, but Keith Frankish was on the Hear This Idea podcast talking about illusionism (see here). I know the episode is about introducing the guest and their ideas, but I think they could have been more upfront about the radical implications of illusionism.[1]
I don't want to have an argument about phenomenal consciousness in this thread;[2] I just want to point out that there do seem to be potential signs of a consensus on a controversial philosophical premise,[3] perhaps without it having been given the scrutiny or justification it deserves.
It seems to me to lead either to eliminativism or to simply redefining consciousness into something other than what people mean by it, in the same way that Dennett redefines "free will" into something that many people find unsatisfactory.
[1] (that you, dear reader, are not conscious, and that you never have been, and no current or future beings either can or will be)
[2] I have cut content and tried to alter my tone to avoid this. If you do want to go 12 rounds of strong illusionism vs qualia realism, then by all means send me a DM.
Reading that, it appears Muehlhauser's illusionism (perhaps unlike Carl's, although I don't have details on Carl's views) is a form that neither implies that consciousness does not exist nor strongly motivates desire satisfactionism:
There is "something it is like" to be us, and I doubt there is "something it is like" to be a chess-playing computer, and I think the difference is morally important. I just think our intuitions mislead us about some of the properties of this "something it's like"-ness.
I don't want to have an argument about phenomenal consciousness in this thread
Maybe copy-paste your cut content into a short-form post? I would be interested in reading it. My own view is that some version of dualism seems pretty plausible, given that my experiences/qualia seem obviously real/existent in some ontological sense (since they can be differentiated/described by some language), and seem like a different sort of thing from physical systems (which are describable by a largely distinct language). However, I haven't thought a ton about this topic or dived into the literature, figuring that it's probably a hard problem that can't be conclusively resolved at this point.
Physicalists and illusionists mostly don't agree with the identification of "consciousness" with magical stuff or properties bolted onto the psychological or cognitive-science picture of minds. All the real feelings and psychology that drive our thinking, speech and action exist. I care about people's welfare, including experiences they like, but also other concerns they have (the welfare of their children, being remembered after they die), and that doesn't hinge on magical consciousness that we, the physical organisms having this conversation, would have no access to. The illusion is of the magical part.
Re desires, the main upshot of non-dualist views of consciousness, I think, is responding to arguments that invoke special properties of conscious states to say that they matter but other concerns of people do not. It's still possible to be a physicalist and think that only selfish preferences focused on your own sense impressions or introspection matter; it just looks more arbitrary.
I think this is important because it's plausible that many AI minds will have concerns mainly focused on the external world rather than their own internal states, and running roughshod over those values because they aren't narrowly mentally-self-focused seems bad to me.
(I understand you are very busy this week, so please feel free to respond later.)
Re desires, the main upshot of non-dualist views of consciousness, I think, is responding to arguments that invoke special properties of conscious states to say that they matter but other concerns of people do not.
I would say that consciousness seems very plausibly special in that it seems very different from other types of things/entities/stuff we can think or talk or have concerns about. I don't know if it's special in a "magical" way or some other way (or maybe not special at all), but in any case it intuitively seems like the most plausible thing I should care about in an impartially altruistic way. My intuition for this is not super-strong, but still far stronger than my intuition for terminally caring about other agents' desires in an impartial way.
So although I initially misunderstood your position on consciousness as claiming that it does not exist altogether ("zombie" is typically defined as "does not have conscious experience"), the upshot seems to be the same: I'm not very convinced of your illusionism, and even if I were, I still wouldn't update much toward desire satisfactionism.
I suspect there may be 3 cruxes between us:
1. I want to analyze this question in terms of terminal vs instrumental values (or, equivalently, axiology vs decision theory), and you don't.
2. I do not have a high prior or strong intuition that I should be impartially altruistic one way or another.
3. I see specific issues with desire satisfactionism (see below for an example) that make it seem implausible.
I think this is important because it's plausible that many AI minds will have concerns mainly focused on the external world rather than their own internal states, and running roughshod over those values because they aren't narrowly mentally-self-focused seems bad to me.
I can write a short program that can be interpreted as an agent that wants to print out as many different primes as it can, while avoiding printing out any non-primes. I don't think there's anything bad about "running roughshod" over its desires, e.g., by shutting it off or making it print out non-primes. Would you bite this bullet, or argue that it's not an agent, or something else?
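For concreteness, a minimal sketch of such a program (here in Python, with a naive trial-division primality test; the details are just one way to fill in the idea) might look like this:

```python
# A sketch of the prime-printing "agent": it enumerates the integers
# forever and prints a number only if it passes a primality check, so
# it can be read as "desiring" to print as many distinct primes as
# possible while never printing a non-prime.
from itertools import count

def is_prime(n: int) -> bool:
    """Naive trial-division primality test."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def prime_printing_agent() -> None:
    """Print the primes in ascending order, skipping every non-prime."""
    for n in count(2):  # 2, 3, 4, 5, ...
        if is_prime(n):
            print(n)

if __name__ == "__main__":
    prime_printing_agent()  # runs until it is shut off, e.g. by Ctrl-C
```

Shutting this process off, or patching it so that it prints a composite number, "thwarts" whatever desire we attribute to it; that is the bullet in question.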
If you would bite the bullet, how would you weigh this agent's desires against other agents'? What specifically in your ethical theory prevents a conclusion like "we should tile the universe with some agent like this, because that maximizes overall desire satisfaction" or "if an agentic computer virus made trillions of copies of itself all over the Internet, it would be bad to delete them, and actually their collective desires should dominate our altruistic concerns"?
More generally, I think you should write down a concrete formulation of your ethical theory, locking down important attributes such as the ones described in @Arepo's Choose your (preference) utilitarianism carefully. Otherwise it's liable to look better than it is, similar to how utilitarianism looked better earlier in its history, before people tried writing down more concrete formulations and realized that it seems impossible to write down a specific one that doesn't lead to counterintuitive conclusions.