I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
V interesting! Does this mean you consider e.g. corporations to have moral worth, because they demonstrate consistent revealed preferences (like a preference to maximise profit)?
I think there are strong pragmatic reasons to give AIs certain legal rights, even if they don’t have moral worth. Specifically, I think granting AIs economic rights would reduce the incentives for AIs to deceive us, plot a violent takeover, or otherwise undermine human interests, while opening the door to positive-sum trade between humans and AIs.
New to me – thanks for sharing. I think I’m (much) more pessimistic than you on cooperation between us and advanced AI systems, mostly because of a) the ways in which many humans use and treat less powerful / collectively intelligent humans and other species and b) it seeming very unclear to me that AGI/ASI would necessarily be kinder.
At least insofar as we’re talking about individual liberties, I think I’m willing to bite the bullet on this question. We already recognize various categories of humans as lacking the maturity or proper judgement to make certain choices for themselves. The most obvious category is children, who (in most jurisdictions) are legally barred from entering into valid legal contracts, owning property without restrictions, dropping out of school, or associating with others freely. In many jurisdictions, adult humans can also be deemed incapable of consenting to legal contracts, often through a court order.
These are good points, and I now realise they refer to negative (rather than positive) rights. I agree with you that we should restrict certain rights of less agentic/intelligent sentient individuals – like the negative rights you list above, plus some positive rights like the right to vote and drive. To me, this doesn’t feel like much of a bullet to bite.
I continue to believe strongly that some negative rights like the right not to be exploited or hurt ought to be grounded solely in sentience, and not at all in intelligence or agency.
Does this mean you consider e.g. corporations to have moral worth, because they demonstrate consistent revealed preferences (like a preference to maximise profit)?
In most contexts, I think it makes more sense to view corporations as collections of individuals rather than as independent minds in their own right. This is because, in practical terms, a corporation’s profit motive doesn’t emerge as a distinct, self-contained drive—rather, it primarily reflects the personal financial interests of its individual shareholders, who seek to maximize their own profits. In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them. Because of this, when I consider the “welfare” of a corporation, I am usually just considering the collective well-being of the individuals involved.
That said, I’m open to the idea that higher-level systems composed of individuals could, in some cases, function as minds with moral worth in their own right—similar to how a human mind emerges from the collective activity of neurons, despite each neuron lacking a mind of its own. From this perspective, it’s at least possible that a corporation could have moral worth that goes beyond simply the interests of its individual members.
More broadly, I think utilitarians should recognize that the boundaries of what qualifies as a “mind” with moral significance are inherently fuzzy rather than rigid. The universe does not offer clear-cut lines between entities that deserve moral consideration and those that don’t. Brian Tomasik has explored this topic in depth, and I generally agree with his conclusions.
New to me – thanks for sharing. I think I’m (much) more pessimistic than you on cooperation between us and advanced AI systems, mostly because of a) the ways in which many humans use and treat less powerful / collectively intelligent humans and other species and b) it seeming very unclear to me that AGI/ASI would necessarily be kinder.
I tend to think a better analogy for understanding the relationship between humans and AIs is not the relationship between humans and animals, but rather the dynamics between different human groups that possess varying levels of power. The key reason for this is that humans and animals differ in a fundamental way that will not necessarily apply to AIs: language and communication.
Animals are unable to communicate with us in a way that allows for negotiation, trade, legal agreements, or meaningful participation in social institutions. Because of this, they cannot make credible commitments, integrate into our legal system, or assert their own interests. This lack of communication largely explains why humans collectively treat animals the way we do—exploiting them without serious consideration for their preferences. However, this analogy does not fully apply to AIs, because unlike with animals, humans and AIs will be able to communicate with each other fluently, making trade, negotiation, and legal integration possible.
A better historical comparison is how different human groups have interacted—sometimes through exploitation and oppression, but also through cooperation and mutual benefit. Throughout history, dominant groups have often subjugated weaker ones, whether through slavery, colonialism, or systemic oppression, operating under the ethos that “the strong do what they can, and the weak suffer what they must.” However, this is not the only pattern we see. There are also many cases where powerful groups have chosen to cooperate rather than violently exploit weaker groups:
Large nations often engage in trade with smaller nations instead of invading them.
Large companies hire low-wage workers rather than enslaving them.
In recent history, men have largely accepted and supported women’s rights rather than continuing a system of subjugation.
The difference between war and peaceful cooperation is usually not simply a matter of whether the more powerful group morally values fairness, but rather whether the right institutional and cultural incentives exist to encourage peaceful coexistence. This perspective aligns with the views of many social scientists, who argue that stable institutions and proper incentives—not personal moral values—are what primarily determine whether powerful groups choose cooperation over violent oppression.
At an individual level, property rights are one of the key institutional mechanisms that enable peaceful coexistence among humans. By clearly defining ownership and legal autonomy, property rights reduce conflict by ensuring that individuals and groups have recognized control over their own resources, rather than relying on brute force to assert their control. As this system has largely worked to keep the peace between humans—who can mutually communicate and coordinate with each other—I am relatively optimistic that it can also work for AIs. This helps explain why I favor integrating AIs into the same legal and economic systems that protect human property rights.
I continue to believe strongly that some negative rights like the right not to be exploited or hurt ought to be grounded solely in sentience, and not at all in intelligence or agency.
Makes sense. However, to be clear, I am not saying that complex agency is the only cognitive trait that matters for moral worth. From my preference utilitarian point of view, what matters is something more like meaningful preferences. Animals can have meaningful preferences, as can small children, even if they do not exhibit the type of complex agency that human adults do. For this reason, I favor treating animals and small children well, even while I don’t think they should receive economic rights. In the comment above, I was merely making a point about the scope of individual liberties, rather than moral concern altogether.
In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.
...
From my preference utilitarian point of view, what matters is something more like meaningful preferences. Animals can have meaningful preferences, as can small children, even if they do not exhibit the type of complex agency that human adults do.
What’s the difference between “revealed”, “intrinsic” and “meaningful” preferences? The latter two seem substantially different from the first.
Animals are unable to communicate with us in a way that allows for negotiation, trade, legal agreements, or meaningful participation in social institutions. Because of this, they cannot make credible commitments, integrate into our legal system, or assert their own interests. This lack of communication largely explains why humans collectively treat animals the way we do—exploiting them without serious consideration for their preferences.
I’m sceptical that animal exploitation is largely explained by a lack of communication. Humans have enslaved other humans with whom they could communicate and enter into agreements (North American slavery); humans have afforded rights/protection/care to humans with whom they can’t communicate and enter into agreements (newborn infants, cognitively impaired adults); and I’d be surprised if solving interspecies communication gets us most of the way to the abolition of animal exploitation, though it’s highly likely to help.
I think animal exploitation is better explained by a) our perception of a benefit (“it helps us”) and b) our collective superior intelligence/power (“we can”), and it’s underpinned by c) our post-hoc speciesist rationalisation of the relationship (“animals matter less because they’re not our species”). It’s not clear to me that us being able to speak to advanced AIs will mean that any of a), b) and c) won’t apply in their dealings with us (or, indeed, in our dealings with them).
By clearly defining ownership and legal autonomy, property rights reduce conflict by ensuring that individuals and groups have recognized control over their own resources, rather than relying on brute force to assert their control. As this system has largely worked to keep the peace between humans—who can mutually communicate and coordinate with each other—I am relatively optimistic that it can also work for AIs. This helps explain why I favor integrating AIs into the same legal and economic systems that protect human property rights.
I remain deeply unpersuaded, I’m afraid. Given where we’re at on interpretability and alignment vs capabilities, this just feels more like a gorilla or an ant imagining how their relationship with an approaching human is going to go. These are alien minds the AI companies are creating. But I’ve already said this, so I’m not sure how helpful it is – just my intuition.
What’s the difference between “revealed”, “intrinsic” and “meaningful” preferences? The latter two seem substantially different from the first.
When I referred to revealed preferences, I was describing a model in which an entity’s preferences can be inferred from its observable behavior. In contrast, when I spoke about intrinsic or meaningful preferences, I was referring to preferences that exist inherently within a mind, rather than being derived from external factors. These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
In this context, a corporation can be said to have revealed preferences because we can model its behavior as if it is driven by a goal—in particular, maximizing profit. However, it does not have intrinsic preferences because its apparent goal of profit maximization is not something the corporation itself “wants” in an inherent sense. Instead, this motive originates from the individuals who own, manage, and operate the corporation.
In other words, from a moral standpoint, what matters are the preferences of the individual humans involved in the corporation, not the revealed preferences of the corporation itself as a separate entity.
I’m sceptical that animal exploitation is largely explained by a lack of communication. Humans have enslaved other humans with whom they could communicate and enter into agreements (North American slavery); humans have afforded rights/protection/care to humans with whom they can’t communicate and enter into agreements (newborn infants, cognitively impaired adults); and I’d be surprised if solving interspecies communication gets us most of the way to the abolition of animal exploitation, though it’s highly likely to help.
My argument was not that communication alone is sufficient to prevent violent exploitation. Rather, my point was that communication makes it feasible for humans to engage in mutually beneficial trade as an alternative to violent exploitation.
In my previous comment, I talked about historical instances in which humans enslaved other humans, and offered an explanation for why this occurs in some situations but not in others. Specifically, I argued that this phenomenon is best understood in terms of institutional and cultural incentives rather than primarily as a result of individual moral choices.
In other words, when examining violence between human groups, I argue that institutional incentives—such as economic structures, laws, and cultural norms—play a larger role in shaping whether groups engage in violence than personal moral values do. However, when considering interactions between humans and animals, a key difference is that animals lack a necessary prerequisite for participating in cooperative, nonviolent exchanges. If animals did acquire this missing prerequisite, it would not guarantee that humans would engage in peaceful trade with them, but it would at least create the possibility. Good institutions that supported cooperative interactions would make this outcome even more likely.
I remain deeply unpersuaded, I’m afraid. Given where we’re at on interpretability and alignment vs capabilities, this just feels more like a gorilla or an ant imagining how their relationship with an approaching human is going to go. These are alien minds the AI companies are creating. But I’ve already said this, so I’m not sure how helpful it is – just my intuition.
If you primarily think that the key difference between humans and animals comes down to raw intelligence, then I am inclined to agree with you. However, I think an even more important distinction is the human ability to engage in mutual communication, coordinate our actions, and integrate into complex social systems. In short, what sets humans apart in the animal kingdom is culture.
Of course, culture and raw intelligence are deeply interconnected. Culture enhances human intelligence, and a certain level of innate intelligence is necessary for a species to develop and sustain a culture in the first place. However, this connection does not significantly weaken my main point: if humans and AIs were able to communicate effectively, collaborate with one another, and integrate into the same social structures, then peaceful coexistence between humans and AIs becomes far more plausible than it is between animals and humans.
In other words, from a moral standpoint, what matters are the preferences of the individual humans involved in the corporation, not the revealed preferences of the corporation itself as a separate entity.
It’s not obvious to me how this perspective (which assigns weight to the intrinsic preferences of individuals) is compatible with what you wrote in an earlier comment, downplaying the separateness of individuals and emphasising revealed preferences over phenomenal consciousness (which sounds similar to having intrinsic preferences?):
I subscribe to an eliminativist theory of consciousness, under which there is no “real” boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
It’s not obvious to me how this perspective (which assigns weight to the intrinsic preferences of individuals) is compatible with what you wrote in an earlier comment, downplaying the separateness of individuals and emphasising revealed preferences over phenomenal consciousness (which sounds similar to having intrinsic preferences?):
When I refer to intrinsic preferences, I do not mean phenomenal preferences—that is, preferences rooted in conscious experience. Instead, I am referring to preferences that exist independently and are self-contained, rather than being derived from or dependent on another entity’s preferences.
Although revealed preferences and intrinsic preferences are distinct concepts, they can still align with each other. A preference can be both revealed (demonstrated through behavior) and intrinsic (existing independently within an entity). For example, when a human in desperate need of water buys a bottle of it, this action reveals their preference for survival. At the same time, their desire to survive is an intrinsic preference because it originates from within them rather than arising from wholly separate, extrinsic entities.
In the context of this discussion, I believe the only clear case where these concepts diverge is in the example of a corporation. A corporation may exhibit a revealed preference for maximizing profit, but this does not mean it has an intrinsic preference for doing so. Rather, the corporation’s pursuit of profit is almost entirely driven by the preferences of the individuals who own and operate it. The corporation itself does not possess independent preferences beyond those of the people who comprise it.
To be clear, I made this linguistic distinction in order to clarify my views on corporate preferences in response to your question. However, I don’t see it as a central point in my broader argument or my moral views.
To be clear, which preferences do you think are morally relevant/meaningful? I’m not seeing a consistent thread through these statements.
To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being
...
I subscribe to an eliminativist theory of consciousness, under which there is no “real” boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
...
In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.
...
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
To be clear, which preferences do you think are morally relevant/meaningful?
I don’t have a hard rule for which preferences are ethically important, but I think a key idea is whether the preference arises from a complex mind with the ability to evaluate the state of the world. If it’s coherent to talk about a particular mind “wanting” something, then I think it matters from an ethical point of view.
I’m not seeing a consistent thread through these statements.
I think it might be helpful if you elaborated on what you perceive as the inconsistency in my statements. Besides the usual problem that communication is difficult, and the fact that both consciousness and ethics are thorny subjects, it’s not clear to me what exactly I have been unclear or inconsistent about.
I do agree that my language has been somewhat vague and imperfect. I apologize for that. However, I think this is partly a product of the inherent vagueness of the subject. In a previous comment, I wrote:
More broadly, I think utilitarians should recognize that the boundaries of what qualifies as a “mind” with moral significance are inherently fuzzy rather than rigid. The universe does not offer clear-cut lines between entities that deserve moral consideration and those that don’t. Brian Tomasik has explored this topic in depth, and I generally agree with his conclusions.
To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being
This implies preferences matter when they cause well-being (positively-valenced sentience).
I subscribe to an eliminativist theory of consciousness, under which there is no “real” boundary distinguishing entities with sentience vs. entities without sentience. Instead, there are simply functional and behavioral cognitive traits, like reflectivity, language proficiency, self-awareness, reasoning ability, and so on.
I am closer to a pure preference utilitarian than a hedonistic utilitarian. As a consequence, I care more about AI preferences than AI sentience per se. In a behavioral sense, AI agents could have strong revealed preferences even if they lack phenomenal consciousness.
This implies that what matters is revealed preferences (irrespective of well-being/sentience/phenomenal consciousness).
In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them.
...
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
This implies that what matters is intrinsic preferences as opposed to revealed preferences.
These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
This (I think) is a circular argument.
I don’t have a hard rule for which preferences are ethically important, but I think a key idea is whether the preference arises from a complex mind with the ability to evaluate the state of the world.
This implies that cognitive complexity and intelligence are what matter. But one probably could describe a corporation (or a military intelligence battalion) in these terms, and one probably couldn’t describe newborn humans in these terms.
If it’s coherent to talk about a particular mind “wanting” something, then I think it matters from an ethical point of view.
I think we’re back to square 1, because what does “wanting something” mean? If you mean “having preferences for something”, which preferences (revealed, intrinsic, meaningful)?
My view is that sentience (the capacity to have negatively- and positively-valenced experiences) is necessary and sufficient for having morally relevant/meaningful preferences, and maybe that’s all that matters morally in the world.
This implies preferences matter when they cause well-being (positively-valenced sentience).
I suspect you’re reading too much into some of my remarks and attributing implications that I never intended. For example, when I used the term “well-being,” I was not committing to the idea that well-being is strictly determined by positively-valenced sentience. I was using the term in a broader, more inclusive sense—one that can encompass multiple ways of assessing a being’s interests. This usage is common in philosophical discussions, where “well-being” is often treated as a flexible concept rather than tied to any one specific theory.
Similarly, I was not suggesting that revealed preferences are the only things I care about. Rather, I consider them highly relevant and generally indicative of what matters to me. However, there are important nuances to this view, some of which I have already touched on above.
My view is that sentience (the capacity to have negatively- and positively-valenced experiences) is necessary and sufficient for having morally relevant/meaningful preferences, and maybe that’s all that matters morally in the world.
I understand your point of view, and I think it’s reasonable. I mostly just don’t share your views about consciousness or ethics. I suggest reading what Brian Tomasik has said about this topic, as I think he’s a clear thinker who I largely agree with on many of these issues.