To be clear, which preferences do you think are morally relevant/meaningful? I’m not seeing a consistent thread through these statements.
To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being
...
I subscribe to an eliminativist theory of consciousness, under which there is no “real” boundary distinguishing entities with sentience from entities without it. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
...
In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.
...
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
To be clear, which preferences do you think are morally relevant/meaningful?
I don’t have a hard rule for which preferences are ethically important, but I think a key idea is whether the preference arises from a complex mind with the ability to evaluate the state of the world. If it’s coherent to talk about a particular mind “wanting” something, then I think it matters from an ethical point of view.
I’m not seeing a consistent thread through these statements.
I think it might be helpful if you elaborated on what you perceive as the inconsistency in my statements. Besides the usual problem that communication is difficult, and the fact that both consciousness and ethics are thorny subjects, it’s not clear to me what exactly I have been unclear or inconsistent about.
I do agree that my language has been somewhat vague and imperfect. I apologize for that. However, I think this is partly a product of the inherent vagueness of the subject. In a previous comment, I wrote:
More broadly, I think utilitarians should recognize that the boundaries of what qualifies as a “mind” with moral significance are inherently fuzzy rather than rigid. The universe does not offer clear-cut lines between entities that deserve moral consideration and those that don’t. Brian Tomasik has explored this topic in depth, and I generally agree with his conclusions.
To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being
This implies that preferences matter when satisfying them produces well-being (positively-valenced sentience).
I subscribe to an eliminativist theory of consciousness, under which there is no “real” boundary distinguishing entities with sentience from entities without it. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
This implies that what matters is revealed preferences (irrespective of well-being/sentience/phenomenal consciousness).
In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.
...
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
This implies that what matters is intrinsic preferences as opposed to revealed preferences.
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
This (I think) is a circular argument: it says the preferences matter because they belong to a being whose desires warrant ethical consideration, which presupposes that the being’s preferences already matter.
I don’t have a hard rule for which preferences are ethically important, but I think a key idea is whether the preference arises from a complex mind with the ability to evaluate the state of the world.
This implies that cognitive complexity and intelligence are what matter. But one could probably describe a corporation (or a military intelligence battalion) in these terms, and probably couldn’t describe a newborn human in them.
If it’s coherent to talk about a particular mind “wanting” something, then I think it matters from an ethical point of view.
I think we’re back to square one: what does “wanting something” mean? If you mean “having preferences for something”, which preferences (revealed, intrinsic, meaningful)?
My view is that sentience (the capacity to have negatively- and positively-valenced experiences) is necessary and sufficient for having morally relevant/meaningful preferences, and maybe that’s all that matters morally in the world.
This implies that preferences matter when satisfying them produces well-being (positively-valenced sentience).
I suspect you’re reading too much into some of my remarks and attributing implications that I never intended. For example, when I used the term “well-being,” I was not committing to the idea that well-being is strictly determined by positively-valenced sentience. I was using the term in a broader, more inclusive sense—one that can encompass multiple ways of assessing a being’s interests. This usage is common in philosophical discussions, where “well-being” is often treated as a flexible concept rather than tied to any one specific theory.
Similarly, I was not suggesting that revealed preferences are the only things I care about. Rather, I consider them highly relevant and generally indicative of what matters to me. However, there are important nuances to this view, some of which I have already touched on above.
My view is that sentience (the capacity to have negatively- and positively-valenced experiences) is necessary and sufficient for having morally relevant/meaningful preferences, and maybe that’s all that matters morally in the world.
I understand your point of view, and I think it’s reasonable. I mostly just don’t share your views about consciousness or ethics. I suggest reading what Brian Tomasik has said about this topic, as I think he’s a clear thinker who I largely agree with on many of these issues.