Yes, the more complex take on the issue is to extrapolate. You can extrapolate that the limited awareness of the chicken will never expand. You can extrapolate that the child could grow into an adult who would care about their life in a rich, meaningful way. Furthermore, you can extrapolate that this adult would be part of the category of individuals with whom you hold an implied social contract, and thus have a duty to respect and protect.
Also, see my other comments elsewhere on this page for more disagreements with your view.
I’m upvoting but disagree-voting. Thanks for engaging with the comments here!
Would you also extend this to fetuses, embryos, zygotes and even uncombined sperm cells and eggs? Is your position very pro-life and pro-natalist?
Okay, this is rough and incomplete, but better to answer sooner than keep trying to find better words.
Not just contractualism. I think the cluster of (contractualism, justice, fairness, governance-design) is important, especially for arguing against majority-vs-minority situations, but it’s only part of the picture.
It's also important to consider the entity in question: its preferences, its appreciation of life, and its potential for suffering. So in part I do agree with some of the pro-pleasure/anti-suffering ideas, but with important differences that I'll try to explain.
Alongside this, also the values I mentioned in my other comment.
I would argue that some moral weight should track something which does, at least roughly, correlate with brain complexity, specifically in the context of self- and world-modeling.
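As a toy sketch of what I mean (entirely my own illustration, not a worked-out theory; the threshold and exponent are arbitrary placeholders), the weighting might look something like this:

```python
# Toy illustration only: a made-up weighting that increases with an entity's
# capacity for self- and world-modeling, and is near zero for reflex-only systems.
def moral_weight(self_model_complexity: float) -> float:
    threshold = 1.0  # arbitrary: below this, the system is closer to a reflex machine
    if self_model_complexity <= threshold:
        return 0.0
    # Diminishing returns above the threshold; the exponent is an arbitrary choice.
    return (self_model_complexity - threshold) ** 0.5

# e.g. something fruit-fly-like might score ~0, a dog clearly above 0,
# and a human higher still.
```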
For an entity to experience what I would call suffering, I think it can be argued that there must be a sufficiently complex computation (potentially, but not necessarily, running on biological neurons) associated with a process which can plausibly be tied to this self model.
There must be something which is running this suffering calculation.
This is not distributed evenly throughout the brain; it's a calculation performed by certain specific areas within the brain. I would not expect someone with a lesion in their visual cortex to be any less capable of suffering. I would expect someone with lesions in their prefrontal cortex, basal ganglia, or the prefrontal-cortex-associated areas of the cerebellum to have deficits in suffering capacity. But even then, not all of the prefrontal cortex is involved, only specific parts.
I don’t think suffering happens in sensory neurons receptive to aversive stimuli. I don’t think an agent choosing to avoid aversive stimuli or act towards self-preservation is sufficient for suffering.
I think I need a different word than suffering to describe a human's experience. I want to say that an insect doesn't suffer, a dog does, and a human undergoes yet another, more important kind of suffering than a dog does. It is this emergent qualitative difference, due to the expansion and complexification of the relevant brain areas, which I think gives humans a wider, richer set of internal mental experiences than other animals.
Imagine a nociceptive neuron alone in a petri dish. A chemical is added to the liquid medium that causes the neuron to fire action potentials. Is this neuron suffering? Clearly not. It is fulfilling its duty, transmitting a message. The programs instantiated within it by its phenotype and proteome do not suffer. Those programs aren't complex enough for a concept such as suffering. Even if they were, this isn't what suffering would be like for them. The nociceptive neuron thrives on the opportunity to do the job it has evolved for.
So what would be a minimum circuit for aversion? It takes quite a few neurons wired up into a specific network pattern within a central nervous system to interpret an incoming sensory signal and assign it a positive or negative reaction. It takes far more central nervous system neurons to create a worldview and predictive self-model which can produce the pattern of computation necessary for an entity who perceives themself to suffer. As we can see in humans, a particular pain-related sensory neuron firing isn't enough to induce suffering; many people deliberately stimulate some of their pain-related sensory neurons in the course of pleasure-seeking activities. To contribute to suffering, the sensory information needs to be interpreted as such by a central processing network which creates a suffering-signal-pattern in response to the aversive-sensory-stimuli signal pattern.
Consider a simpler circuit in the human body: the spinal reflex circuit. The spinal reflex enables us to react to aversive stimuli (e.g. heat) faster than our brains can perceive them. The loop goes from the sensory neuron into the spinal cord, through some interneurons, and then directly to output motor neurons. Before the signal has even reached the brain, the muscles are contracting to withdraw the limb. I argue that even though this is a behavioral output in reaction to an aversive sensory stimulus, there is no suffering in that loop. It is too simple; it's just a simple program, like a thermostat. The suffering only happens in the brain, once the brain perceives the sensory information and interprets it as a pattern that it associates with suffering.
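To make the contrast concrete, here is a deliberately crude sketch (my own toy illustration, not a model of real neural circuits): the reflex arc is a stateless stimulus-response mapping, while the pattern I'm calling suffering only arises when a central network interprets the signal relative to a self-model.

```python
# Toy contrast, not a neuroscience model; thresholds and labels are arbitrary.

def spinal_reflex(skin_temperature_c: float) -> str:
    """Stateless stimulus->response mapping, like a thermostat.
    Nothing in this loop perceives or interprets anything."""
    return "withdraw limb" if skin_temperature_c > 45.0 else "no action"

class SelfModelingAgent:
    """Only a system with a predictive self-model can, on my view, instantiate
    the pattern I'd call suffering: the aversive signal has to be interpreted
    as something happening to the modeled self."""
    def __init__(self) -> None:
        self.self_model = {"predicted_state": "ok"}

    def perceive(self, aversive_signal: bool) -> str:
        if not aversive_signal:
            return "no suffering"
        # The central network interprets the signal relative to the self-model.
        self.self_model["predicted_state"] = "threatened"
        return "suffering-signal-pattern"
```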
I think that the reactions of creatures as simple as shrimp and fruit flies are much closer to a spinal reflex than to a predictive self with a concept of suffering. I think that imagining a fruit fly to be suffering is imagining that there is more ‘perceiver’ there, more ‘self’ there than is in fact the case. The fruit fly is in fact closer to being a simple machine than it is to being a tiny person.
The strategic landscape as I see it
I believe we are at a hinge in history, where everything we do matters primarily insofar as it channels through AI risk and development trajectories. In five to ten years, I expect the world to be radically transformed. Either we triumph, all of humanity's material woes are over, and it will be easy to afford 'luxury charity' like taking care of animals alongside eliminating poverty and disease, or we fail and the AI destroys the world. There's no in-between; I don't expect any half-wins.
Some of my moral intuitions
I think we each have to depend on our moral intuitions to at least some extent as well. I feel like any theory taken to an extreme without that grounding quickly goes to bad places. I also think my point of view is perhaps easier to understand if I honestly lay out on the table what I feel to be true, alongside my reasoning.
(assuming a healthy young person with many years ahead of them)
Torturing a million puppies for a hundred years to prevent one person from stubbing their toe: bad.
Torturing a million puppies for a hundred years to prevent one person from dying: maybe bad?
Torturing 100 puppies for a year to prevent one young person from dying: good.
Torturing a million shrimp for a hundred years to prevent one person from stubbing their toe: maybe bad?
Torturing a million shrimp for a hundred years to prevent one person from dying: great!
Torturing a million chickens for a hundred years to prevent one person from stubbing their toe: bad.
Torturing a million chickens for a hundred years to prevent one person from dying: good.
Torturing a million chickens for a hundred years to prevent one puppy from dying: bad.
Torturing a million chickens for a hundred years to prevent dogs from going extinct: great!
Ok, I just read this post and the discussion on it (again, great insights from MichaelStJules): https://forum.effectivealtruism.org/posts/AvubGwD2xkCD4tGtd/only-mammals-and-birds-are-sentient-according-to
Ipsundrum is the concept I haven't had a word for: the self-modeling feedback loops in the brain.
So, now I can say that my viewpoint is something like gradualism over the quantity/quality of ipsundrum across species.
Also, I have an intuition around qualitative distinctions that emerge from different quantities/qualities/interpretations of experiences. Thus a stubbed toe and a lifetime of torture seem like qualitatively different things, even if their component pieces are the same.
Also this thread (and maybe especially my response) may be useful.
I’m sympathetic to gradualism.
I’m also sympathetic to the view that no number of toe stubs aggregate to outweigh a lifetime of torture (maybe unless the toe stubs together feel like intense torture).
This moral theory just seems too ad hoc and convoluted to me, and it ultimately leads to conclusions I find abhorrent, i.e. that because animals can't speak up for themselves in a way that is clearly intelligible to humans, we are at liberty to inflict arbitrary amounts of suffering on them.
I personally find a utilitarian ethic much more intuitive and palatable, but I’m not going to get into the weeds trying to convince you to change your underlying ethic.