(I understand you are very busy this week, so please feel free to respond later.)
Re desires, I think the main upshot of non-dualist views of consciousness is in responding to arguments that invoke special properties of conscious states to say that those states matter but people's other concerns do not.
I would say that consciousness seems very plausibly special in that it seems very different from other types of things/entities/stuff we can think or talk or have concerns about. I don’t know if it’s special in a “magical” way or some other way (or maybe not special at all), but in any case intuitively it currently seems like the most plausible thing I should care about in an impartially altruistic way. My intuition for this is not super-strong but still far stronger than my intuition for terminally caring about other agents’ desires in an impartial way.
So although I initially misunderstood your position on consciousness as claiming that it does not exist altogether (“zombie” is typically defined as “does not have conscious experience”), the upshot seems to be the same: I’m not very convinced of your illusionism, and if I were I still wouldn’t update much toward desire satisfactionism.
I suspect there may be 3 cruxes between us:
1. I want to analyze this question in terms of terminal vs. instrumental values (or equivalently, axiology vs. decision theory), and you don't.
2. I do not have a high prior or strong intuition that I should be impartially altruistic one way or another.
3. I see specific issues with desire satisfactionism (see below for an example) that make it seem implausible.
I think this is important because it’s plausible that many AI minds will have concerns mainly focused on the external world rather than their own internal states, and running roughshod over those values because they aren’t narrowly mentally-self-focused seems bad to me.
I can write a short program that can be interpreted as an agent that wants to print out as many different primes as it can, while avoiding printing out any non-primes. I don’t think there’s anything bad about “running roughshod” over its desires, e.g., by shutting it off or making it print out non-primes. Would you bite this bullet, or argue that it’s not an agent, or something else?
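To make this concrete, here is a rough sketch of the kind of program I mean (in Python; the function names `is_prime` and `prime_printing_agent` are just illustrative, and the implementation details don't matter for the argument):

```python
# A trivially simple "agent" whose behavior can be read as the desire
# "print as many distinct primes as possible, and never print a non-prime."

def is_prime(n: int) -> bool:
    """Return True if n is prime, using simple trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def prime_printing_agent() -> None:
    """Check candidates in increasing order and 'act' only on primes."""
    n = 2
    while True:  # runs until the process is stopped
        if is_prime(n):
            print(n)  # the agent's only "goal-directed" action
        n += 1

if __name__ == "__main__":
    prime_printing_agent()
```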
If you would bite the bullet, how would you weigh this agent’s desires against other agents’? What specifically in your ethical theory prevents a conclusion like “we should tile the universe with some agent like this because that maximizes overall desire satisfaction?” or “if an agentic computer virus made trillions of copies of itself all over the Internet, it would be bad to delete them, and actually their collective desires should dominate our altruistic concerns?”
More generally, I think you should write down a concrete formulation of your ethical theory, locking down important attributes such as the ones described in @Arepo's Choose your (preference) utilitarianism carefully. Otherwise it's liable to look better than it is, similar to how utilitarianism looked better earlier in its history, before people tried writing down more concrete formulations and realized that it seems impossible to specify a version that doesn't lead to counterintuitive conclusions.