Arguments for fruit flies being about as likely to be more morally significant than humans as less
Note that the median moral weight for fruit flies assuming a loguniform distribution (the type I prefer) is 0.00192 << 1. So I do not think the moral weight of fruit flies relative to humans is as likely to be larger than 1 as it is to be smaller than 1.
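To make the distributional claim concrete, here is a minimal sketch in Python (using SciPy). The bounds a and b are hypothetical, chosen only so the median matches the 0.00192 quoted above; they are not the bounds actually used in the analysis:

```python
# Median of a hypothetical loguniform moral weight distribution for fruit
# flies. The bounds are illustrative, picked so the median is ~0.00192.
from scipy.stats import loguniform

a, b = 3.69e-7, 10.0     # hypothetical lower/upper bounds on the moral weight
dist = loguniform(a, b)

print(dist.median())     # sqrt(a * b) ≈ 0.00192
print(1 - dist.cdf(1))   # P(moral weight > 1) ≈ 0.13 with these bounds
```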
I think it's reasonable to say a fruit fly cannot remember things in the long term, and it cannot contemplate or ruminate, which is one of the worst aspects of negative experiences and pain. I think most people would prefer to have experiences of extreme pain and trauma erased from their lives.
Based on this analysis from Jason Schukraft, "mental time travel ['the capacity to remember past events and imagine future events'] seems to reduce the intensity of experiences in some circumstances and amplify the intensity of experiences in other circumstances. It is thus unclear whether animals that possess this ability have characteristically more or less intense valenced experiences overall" (see this section for details).
A fruit fly lives a tiny fraction of the duration of a human's life, so it would have to experience its own life much faster.
The moral weights presented here have units QALY/aQALY (QALY per "animal QALY"), and therefore they are not affected by differences in life expectancy between species. For example, a moral weight of 2 QALY/cQALY (QALY per "chicken QALY") means that 2T years of fully healthy human life are as valuable as T years of fully healthy chicken life.
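As a toy illustration of the unit handling (the function name and numbers below are mine, not part of the analysis):

```python
# Converting animal QALYs into human-equivalent QALYs via a moral weight in
# QALY/aQALY. Illustrative only; not code from the original analysis.
def human_equivalent_qalys(animal_qalys: float, moral_weight: float) -> float:
    """moral_weight has units QALY/aQALY (human QALYs per animal QALY)."""
    return moral_weight * animal_qalys

# With 2 QALY/cQALY, T = 5 years of fully healthy chicken life are as
# valuable as 10 years of fully healthy human life.
print(human_equivalent_qalys(5.0, 2.0))  # 10.0
```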
A human can be considered an ensemble or family of different personalities and conscious processes. Each one of these may have moral significance, increasing the relative moral significance of a human.
I tend to agree. From Jason's analysis (see here), "species that experience a greater variety and/or greater complexity of emotional states are, all else equal, capable of more intense positive and negative experiences".
The more complex something is, the more highly it typically tends to be valued in generic terms.
From the "Key Highlights" of Jason's analysis:
"Some aspects of cognitive sophistication appear to be positively correlated with intensity range; other aspects of cognitive sophistication appear to be negatively correlated with intensity range".
"Affective complexity [diversity and depth of emotional sensations an animal can experience] generally appears to be positively correlated with intensity range".
So I tend to agree with your point, and think this is a good argument for not trusting mean moral weights which are much larger than 1. For the loguniform distributions, my maximum mean moral weight is 3, which is not much larger than 1.
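To see why a loguniform mean can sit far above the loguniform median (the heavy upper tail pulls the mean up), here is a small sketch with hypothetical bounds, chosen so the mean lands near the maximum of 3 mentioned above:

```python
# Closed-form mean and median of loguniform(a, b). Bounds are hypothetical.
import numpy as np

a, b = 1e-6, 50.0               # hypothetical bounds on the moral weight
mean = (b - a) / np.log(b / a)  # mean of loguniform(a, b)
median = np.sqrt(a * b)         # median of loguniform(a, b)

print(mean)    # ≈ 2.8
print(median)  # ≈ 0.007, i.e. the mean is roughly 400 times the median
```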
Humans form a network of social connections. When a human is lost, their loss is understood and grieved by many other humans, thus greatly increasing the overall negative effect of harm to a human compared to a fruit fly.
Humans have very few children relative to fruit flies, so they are likely valued more highly on an individual level by their families and communities.
I agree, and think this should be considered when comparing interventions. That being said, these points do not influence the moral weight, which is the ratio between the value of T years of fully healthy animal life and that of T years of fully healthy human life (i.e. the duration of the experiences is normalised).
A final thought is that we don't know with very high confidence that animals are conscious in the way that we care about morally, but we know this for sure with humans. For that reason, we would be safer to prefer to save humans first, in case we were wrong about animals having conscious experiences in the first place.
This is taken into account here by multiplying the moral weight given moral patienthood by:
The probability that beings of the species have moral patienthood, as defined by Luke Muehlhauser here, which was set to the values provided in this section of Open Philanthropy's 2017 Report on Consciousness and Moral Patienthood (see the sketch below).
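In code, the adjustment is just a product; the numbers below are placeholders, not values from the report:

```python
# Scaling the moral weight conditional on moral patienthood by the
# probability of moral patienthood. Placeholder numbers only.
p_patienthood = 0.1              # hypothetical P(species has moral patienthood)
weight_given_patienthood = 0.02  # hypothetical moral weight given patienthood

expected_moral_weight = p_patienthood * weight_given_patienthood
print(expected_moral_weight)     # 0.002
```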
In terms of your summary:
In summary, the most relevant factors for moral significance are likely the degree of social embeddedness, the experience of higher-order emotions and complexity in general, the ability to grieve, long lives, and long memories, which strongly implies that humans are more morally significant than all or most other animals.
I think your conclusion may well be right, but there is lots of uncertainty, so I do not think there is a "strong implication". For example, I think the likelihood of the moral weight being larger than 1 is at least 10 %, so the mean moral weight should be larger than 0.1 (since the moral weight is non-negative, its mean is at least 1 times the probability of it exceeding 1).
As a disclaimer, I came in with the preconception that one should assign near-zero probability to animals being of more moral relevance than humans.
After reading the arguments, I have found little that convincingly contradicts this.
It's true that we should be uncertain as to how animals experience the world. However, I don't feel that the uncertainty in moral value should be thought of as ever exceeding a human's moral value.
To illustrate my current understanding of the best way to think about this topic, I think all your probability distributions should probably be modeled as never exceeding 1 for any animal, as the probability of such an outcome is so low it's not worth considering. I think of it like the probability that you can build a perpetual energy-creating machine violating the laws of physics, or the probability that tomorrow the sun does not rise because the earth stopped rotating.
Perhaps it could be analogized to the moral probability that causing suffering is a good thing, all things considered. One might argue that the human brain is extremely complicated, and morality is complicated, so we should put some weight on moral views that prefer to cause infinite suffering for eternity. Perhaps one could argue that some people enjoy causing others to suffer, and they might be right, and so suffering might be intrinsically good. I think this argument has about as much supporting evidence as the concept that animals could be more morally relevant than people. However, again, I would say the probability of such an outcome is so low it's not worth considering.
Although it's true we do not know the details of how animals experience consciousness, this is not enough to overturn the intuition all humans share about the morality of killing people versus animals: one is simply entirely different from the other, and there is no instance in which it is better to kill a person than an animal. This conception has apparently been held constant across many cultures throughout human history. In some cases, some animals were revered as gods, but this was less about the animals and more about the gods. In some cases, animals and living things were seen as equally valuable as humans. I think this is unlikely, though not impossible, but the key point is that killing was seen as wrong in all cases, not that animals were seen as more valuable than humans.
Suffering is not the only relevant moral consideration. See "The Righteous Mind" by Jonathan Haidt: humans probably share a few more moral foundations than purely care/harm, including authority, fairness, sanctity, etc. Some may view these as equally morally relevant. My point here is that it's questionable whether we have the same moral responsibility toward nonhuman animals as we have toward humans, depending on how you construct your moral frameworks. If you look at how human brains are wired, the foundations of our conceptions of morality are built on in-group vs out-group distinctions. So, basing the moral status of animals on our understanding of human psychology, which is our best way to guess at a "correct" moral framework, would indicate that, as things become less like us, our moral intuitions will guide us to value them less.
I think you may have come to your probability distributions because you are a sequence thinker, using your intuitions to argue for each part of a sequence which arrives at some conclusion, whereas the proper thing to do when deciding whether to spend on an animal welfare charity is to use cluster-style thinking.
I hope that this is seen as a respectful difference in perspective and not at all a personal attack. I think it is useful to question these sorts of assumptions to make moral progress, but I also think we need a lot of evidence to overturn the assumption that humans are at least as morally relevant as animals, in large part due to the pre-existing moral intuitions we all probably share. There do not appear to be sufficient arguments out there to overturn this position.
Okay, that was enough philosophizing; let me put in a few more points in favor of my position here:
Most people I know who are smarter than me believe humans are more morally significant than animals. I know of zero people seriously arguing the opposite side.
If morality is actually all fake and a human invention with no objective truth to it, then humans and animals will both be worth zero, and I will still be correct.
People who argue animals are more morally relevant than humans do not actually kill people to save animals, so there's probably no one who sincerely, deep down, believes this.
People tend to anthropomorphize things like teddy bears and Roombas, and mistakenly assign them some moral worth until they think about it more. Therefore, our intuitions can guide us to incorrect conclusions about what is morally worthwhile.
Thanks for clarifying!