Unfair to ask people to consider the ethics of their food while their loved ones are dying of malaria and TB.
I am having a hard time following this. We aren't, to my knowledge, asking people whose loved ones are at significant risk of dying of malaria and TB for money. AFAIK, we're not asking them to prioritize animal welfare over their loved ones in non-financial ways either. Could you explain what specifically we're asking of this class of people?
On top of Jason's point, this argument presupposes that animals are food and therefore not worthy of much if any moral concern, but there are many reasons to think animals are worthy of moral concern.
Are we not discussing the situation with them? What about a Rawlsian veil of ignorance? A social contract? If these people were in the same room with you, a mother holding her dying child in her arms, and you were holding a community meeting about whether to save her child or save a cage with some chickens in it… wouldn't she be expected to have a right to at least argue in favor of her child's life?
The very fact that humans are able to be part of the discussion is itself an important argument in favor of prioritizing the needs of humans.
Behind the veil, I could be a chicken. If you've already decided only humans are moral patients (and so I already know I am human), the rest of the thought exercise does not seem to add much.
I took Henry's argument to point to a special moral duty to one's loved ones. I have, for instance, special duties to my son. That makes certain actions appropriate or inappropriate for me; I am not going to spend money needed to save my son's life on advancing animal welfare. Telling me I should do so would be pressuring me to break the special moral duty to my son. But I can't expect other people to attach any special weight to the fact that he is my son. That's why I reacted as I did.
But the only thing the chicken will say is "bawk cluck cluck bawk". It seems relevant that this is neither an argument for its own welfare nor the welfare of anyone else. Claude Sonnet, GPT-4o, Gemini, Llama… all of these can at least make arguments in favor of a particular social contract and plausibly could uphold their end of the bargain if allowed to make notes for themselves that they would see before every conversation.
I take you, as a moral patient, to have value in your son. The extra value you place on your son's life is a value I would count when summing up utilities for you. Also, I would consider it a predictive factor in estimating your behavior. I personally don't think there is such a thing as "moral rules" by which it makes sense to judge you for valuing or not valuing your child above other humans with whom you are in an implied social contract. Which is to say, I am a moral anti-realist.
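A minimal sketch of what I mean by counting that extra value when summing up utilities; the agents, weights, and welfare numbers are purely hypothetical:

```python
# Toy illustration: one agent's concern for another is counted as part of
# that agent's own utility, not as a separate "moral rule" about whom
# they ought to value. All numbers are made up.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    own_welfare: float                           # welfare from the agent's own experiences
    concern: dict = field(default_factory=dict)  # other's name -> weight placed on their welfare

def total_utility(agent: Agent, welfare: dict) -> float:
    """Agent's utility = own welfare + weighted welfare of those they care about."""
    relational = sum(w * welfare[other] for other, w in agent.concern.items())
    return agent.own_welfare + relational

welfare = {"parent": 5.0, "son": 6.0}
parent = Agent("parent", own_welfare=welfare["parent"], concern={"son": 2.0})

print(total_utility(parent, welfare))  # 5.0 + 2.0 * 6.0 = 17.0
```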
Would you say children don't matter in themselves (only indirectly through others, like their parents or society more generally), when they're too young to "uphold their end of the bargain if allowed to make notes for themselves that they would see before every conversation"?
I considered chickens under different contractualist views here:
Should our actions be justifiable to chickens, real or hypothetical trustees for them (Scanlon, 1998, p.183), or idealized rational versions of them? If yes, then chickens could be covered by contractualism, and what's at stake for them seems reasonably large, given points 1 and 2 and their severe suffering on factory farms. See also the last two sections, on contractualist protections for animals and future people, in Ashford and Mulgan, 2018.
Could the capacity to mount reasonable complaints be enough to be covered under contractualism? Can chickens actually mount reasonable complaints? If yes to both, then chickens could be covered by contractualism. Chickens can and do complain about their situations and mistreatment in their own ways (vocalizations, i.e. gakel-calls, feelings of unpleasantness and aversion, attempts to avoid, etc.), and what makes a complaint reasonable could just be whether the reasons for the complaint are strong enough relative to other reasons (e.g. under Parfit's Complaint Model or Scanlon's modified version, described in Scanlon, 1998, p.229), which does not require (much) rationality on the part of the complainant. Severe suffering, like what factory farmed chickens endure, seems like a relatively strong reason for complaint.
Also see this article.
What if the mother wasn't there (say she is no longer alive) and it was just the dying baby? The only thing the baby would say is "wah wah wah", which is neither an argument for its own welfare nor the welfare of anyone else.
(I'm trying to demonstrate that the ability to speak up for yourself shouldn't be a criterion in determining the strength of your moral rights...).
I would also add that animals do speak up for themselves. Some of our own arguments for our own welfare are very simple, or bottom out in simple claims like "this hurts!". Animal distress calls can effectively express "this hurts!". So, other animals plausibly do make (very simple) arguments for their own welfare or better treatment; we just need to try to understand what they're communicating.
Agreed!
Yes, the more complex take on the issue is to extrapolate. You can extrapolate that the limited awareness of the chicken will never expand. You can extrapolate that the child could grow into an adult who would care about their life in a rich, meaningful way. Furthermore, you can extrapolate that this adult would be part of the category of individuals with whom you hold an implied social contract, and thus that you have a duty to respect and protect them.
Also, see my other comments elsewhere on this page for more disagreements with your view.
I'm upvoting but disagree-voting. Thanks for engaging with the comments here!
Would you also extend this to fetuses, embryos, zygotes and even uncombined sperm cells and eggs? Is your position very pro-life and pro-natalist?
Okay, this is rough and incomplete, but better to answer sooner than keep trying to find better words.
Not just contractualism. I think the cluster of (contractualism, justice, fairness, governance-design) is important, especially for arguing against majority-vs-minority situations, but it's only part of the picture.
It's also important to consider the entity in question: its preferences, its appreciation of life, and its potential for suffering. So in part I do agree with some of the pro-pleasure/anti-suffering ideas, but with important differences that I'll try to explain.
Alongside this, also the values I mentioned in my other comment.
I would argue that there should be some weighting on something which correlates, at least roughly, with brain complexity, in the context of self- and world-modeling.
For an entity to experience what I would call suffering, I think it can be argued that there must be a sufficiently complex computation (potentially, but not necessarily, running on biological neurons) associated with a process which can plausibly be tied to this self-model.
There must be something which is running this suffering calculation.
This is not distributed evenly throughout the brain; it's a calculation performed by certain specific areas within the brain. I would not expect someone with a lesion in their visual cortex to be any less capable of suffering. I would expect someone with lesions in their prefrontal cortex, basal ganglia, or the prefrontal-cortex-associated areas of the cerebellum to have deficits in suffering capacity. But even then, not all of the prefrontal cortex is involved, only specific parts.
I don't think suffering happens in sensory neurons receptive to aversive stimuli. I don't think an agent choosing to avoid aversive stimuli or act towards self-preservation is sufficient for suffering.
I think I need a different word than "suffering" to describe a human's experience. I want to say that an insect doesn't suffer, a dog does, and a human undergoes yet another, more important kind of suffering than a dog does. It is this emergent qualitative difference, due to the expansion and complexification of the relevant brain areas, which I think leads to humans having a wider, richer set of internal mental experiences than other animals.
Imagine a nociceptive neuron alone in a petri dish. A chemical is added to the liquid medium that causes the neuron to fire action potentials. Is this neuron suffering? Clearly not. It is fulfilling its duty, transmitting a message. The programs instantiated within it by its phenotype and proteome do not suffer. Those programs aren't complex enough for a concept such as suffering. Even if they were, this isn't what suffering would be like for them. The nociceptive neuron thrives on the opportunity to do the job it has evolved for.
So what would be a minimum circuit for aversion? There need to be quite a few neurons wired up into a specific network pattern within a central nervous system to interpret an incoming sensory signal and assign it a positive or negative reaction. Far more central nervous system neurons are needed to create a worldview and predictive self-model that can produce the pattern of computation necessary for an entity who perceives themself to suffer. As we can see in humans, a particular pain-related sensory neuron firing isn't by itself enough to induce suffering: many people deliberately stimulate some of their pain-related sensory neurons in the course of pleasure-seeking activities. To contribute to suffering, the sensory information needs to be interpreted as such by a central processing network which produces a suffering-signal pattern in response to the aversive-sensory-stimulus signal pattern.
Consider a simpler circuit in the human body: the spinal reflex circuit. The spinal reflex circuit enables us to react to aversive stimuli (e.g. heat) faster than our brains can perceive them. The loop goes from the sensory neuron, into the spinal cord, through some interneurons, and then directly to output motor neurons. Before the signal has made it to the brain, the muscles are already moving in response to the spinal reflex, withdrawing the limb. I argue that even though this is a behavioral output in reaction to aversive sensory stimuli, there is no suffering in that loop. It is too simple. It's just a simple program, like a thermostat. The suffering only happens in the brain, once the brain perceives the sensory information and interprets it as a pattern that it associates with suffering.
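Here is a toy sketch of the distinction I'm drawing, in the spirit of the thermostat comparison; the names and thresholds are arbitrary, and this is not meant as a claim about how real neural circuits are implemented:

```python
# Toy contrast between a reflex-style loop and an interpretive self-model.
# Arbitrary names and thresholds; purely a conceptual illustration.

def spinal_reflex(stimulus_intensity: float) -> str:
    """Maps aversive input directly to motor output, thermostat-style.
    No interpretation and no self-model, hence (on my view) no suffering."""
    return "withdraw limb" if stimulus_intensity > 0.5 else "no action"

class SelfModelingBrain:
    """Only registers 'suffering' when the incoming signal is interpreted
    against a predictive model of the organism itself."""

    def __init__(self) -> None:
        # e.g. the person chose spicy food or strenuous exercise
        self.expecting_pain = False

    def interpret(self, stimulus_intensity: float) -> str:
        if stimulus_intensity <= 0.5:
            return "signal ignored"
        if self.expecting_pain:
            return "aversive signal reinterpreted as acceptable or even pleasant"
        return "aversive signal interpreted as happening to me: suffering"

print(spinal_reflex(0.9))                  # withdraw limb (no suffering in the loop)
print(SelfModelingBrain().interpret(0.9))  # suffering arises only at this interpretive stage
```

The point of the sketch is just that the reflex function and the interpretive model can produce the same outward behavior, while only the latter involves anything I would call suffering.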
I think that the reactions of creatures as simple as shrimp and fruit flies are much closer to a spinal reflex than to a predictive self with a concept of suffering. I think that imagining a fruit fly to be suffering is imagining that there is more "perceiver" there, more "self" there than is in fact the case. The fruit fly is in fact closer to being a simple machine than it is to being a tiny person.
The strategic landscape as I see it
I believe we are at a hinge in history, where everything we do matters primarily insofar as it channels through AI risk and development trajectories. In five to ten years, I expect the world to be radically transformed. Either we triumph, and all of humanity's material woes will be over and it will be easy to afford "luxury charity" like taking care of animals alongside eliminating poverty and disease, or we fail and the AI destroys the world. There's no in-between; I don't expect any half-wins.
Some of my moral intuitions
I think we each have to depend on our moral intuitions to at least some extent as well. I feel like any theory taken to an extreme without that grounding goes to bad places quickly. I also think my point of view is perhaps easier to understand if I try to honestly lay out on the table what I feel to be true alongside my reasoning.
(assuming a healthy young person with many years ahead of them)
Torturing a million puppies for a hundred years to prevent one person from stubbing their toe: bad.
Torturing a million puppies for a hundred years to prevent one person from dying: maybe bad?
Torturing 100 puppies for a year to prevent one young person from dying: good.
Torturing a million shrimp for a hundred years to prevent one person from stubbing their toe: maybe bad?
Torturing a million shrimp for a hundred years to prevent one person from dying: great!
Torturing a million chickens for a hundred years to prevent one person from stubbing their toe: bad.
Torturing a million chickens for a hundred years to prevent one person from dying: good.
Torturing a million chickens for a hundred years to prevent one puppy from dying: bad.
Torturing a million chickens for a hundred years to prevent dogs from going extinct: great!
Ok, I just read this post and the discussion on it (again, great insights from MichaelStJules). https://forum.effectivealtruism.org/posts/AvubGwD2xkCD4tGtd/only-mammals-and-birds-are-sentient-according-to Ipsundrum is the concept I haven't had a word for, of the self-modeling feedback loops in the brain.
So now I can say that my viewpoint is something like gradualism over the quantity/quality of ipsundrum across species.
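As a sketch only, and setting aside the qualitative distinctions I mention next, a gradualist weighting could look something like the following; the species weights are placeholders, not numbers I am committing to:

```python
# Sketch of a gradualist weighting: moral weight scales with a per-species
# "ipsundrum" factor (degree of self-modeling). All weights are placeholders.

IPSUNDRUM_WEIGHT = {
    "fruit fly": 0.0,
    "shrimp": 0.001,
    "chicken": 0.05,
    "dog": 0.3,
    "human": 1.0,
}

def weighted_suffering(species: str, individuals: int, years: float) -> float:
    """Suffering-years scaled by the species' (hypothetical) ipsundrum weight."""
    return IPSUNDRUM_WEIGHT[species] * individuals * years

# e.g. two of the intuition cases above:
print(weighted_suffering("shrimp", 1_000_000, 100))   # ~100,000 weighted suffering-years
print(weighted_suffering("chicken", 1_000_000, 100))  # ~5,000,000 weighted suffering-years
```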
Also, I have an intuition around qualitative distinctions that emerge from different quantities/qualities/interpretations of experiences. Thus a stubbed toe and a lifetime of torture seem like qualitatively different things, even if their component pieces are the same.
Also this thread (and maybe especially my response) may be useful.
I'm sympathetic to gradualism.
I'm also sympathetic to the view that no number of toe stubs aggregates to outweigh a lifetime of torture (maybe unless the toe stubs together feel like intense torture).
This moral theory just seems too ad hoc and convoluted to me, and it ultimately leads to conclusions I find abhorrent, i.e. that because animals can't speak up for themselves in a way that is clearly intelligible to humans, we are at liberty to inflict arbitrary amounts of suffering on them.
I personally find a utilitarian ethic much more intuitive and palatable, but I'm not going to get into the weeds trying to convince you to change your underlying ethic.
Can I push you on this a bit?
Sure