I feel like the case for animal welfare rests on incorrect philosophical arguments. I do not think that animals (sentient) suffer in the same sense that humans (sapient) suffer. I do not believe that any amount of the qualitatively different animal suffering adds up to any amount of human suffering. They are non-commensurate. For more detail, see here: https://www.lesswrong.com/posts/Htu55gzoiYHS6TREB/sentience-matters?commentId=wusCgxN9qK8HzLAiw In accordance with this view, I argue that not a single dollar should be spent on reducing animal suffering while there is still a single human at risk of dying from malnourishment or preventable disease.
Aside from this, I believe there is a separate argument from urgency. I think the immediate concerns of AI outweigh any possible impact from spending on animal suffering. The continued existence of humanity and the prevention of suffering in sapient digital beings are both at a hinge in history, where effort put into these causes now matters far more than the non-urgent cause of animal suffering. Animal suffering is increasing only relatively slowly, whereas these other causes may explode in importance within a matter of months. There is no sense in reupholstering the back seat of your car while it speeds down a steep mountain road with no one at the wheel...
Does urgency (point 2) apply to global health specifically, given the debate topic of animal welfare vs global health?
Maybe we can consider biorisk, including biorisk from TAI (EDIT: and other ways we might all die, and other GCRs), to fit inside global health, but I don’t think that’s what’s usually intended.
Global health is about human lives and human suffering. It seems to me that AI safety is the #1 global health issue in the world today.
But given that you mean ‘health interventions for poor people’, how do you separate that from AI risk? Suppose you have good reason to believe that, if you fail to act, that person will be killed in less than a decade, along with all animals and all life on Earth. It seems odd to me to put ‘treat curable diseases of human population x’ in a different bucket from ‘keep human population x from being murdered’. Aren’t these both health interventions? Don’t they both deliver QALYs?
I agree you can consider them “health interventions”, but I think what people have in mind by global health, in general and in this debate, is mostly GiveWell recommendations, and maybe other cause areas in Open Phil’s Global Health and Wellbeing focus areas, which are separate from global catastrophic risks (GCRs). Maybe the line is somewhat artificial.
One reason to separate GCRs from global health is that GCRs and GCR interventions seem very one-shot,[1] more poorly evidenced, and much more speculative than many global health interventions, like GiveWell recommendations. If you want to be more sure you’re making a difference,[2] GiveWell recommendations seem better for that.
That is, the bet is on whether a global catastrophe occurs at all, with highly correlated individual outcomes, rather than on individual deaths separately, e.g. one case of malaria prevented. (A rough simulation sketch of this contrast is included below.)
Although perhaps a very different difference from what GiveWell estimates, since they don’t account for the possibility that we all get killed by AI, or that the lives we save today go on for hundreds of years due to technological advances.
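To make that one-shot vs. many-independent-bets contrast concrete, here is a minimal Monte Carlo sketch in Python. All of the probabilities and payoffs below are made-up placeholders for illustration, not actual cost-effectiveness estimates from GiveWell or anyone else.

```python
import random

# Toy Monte Carlo illustration of footnote [1]; all numbers are placeholders.

def givewell_style(n_recipients=1000, p_avert_per_recipient=0.002):
    """Deaths averted when each recipient is an (approximately) independent bet."""
    return sum(random.random() < p_avert_per_recipient for _ in range(n_recipients))

def gcr_style(p_catastrophe=0.1, p_intervention_tips_outcome=0.001, lives_at_stake=8e9):
    """Deaths averted when everything hinges on one highly correlated outcome."""
    if random.random() < p_catastrophe and random.random() < p_intervention_tips_outcome:
        return lives_at_stake
    return 0.0

trials = 10_000
results = {
    "GiveWell-style": [givewell_style() for _ in range(trials)],
    "GCR-style": [gcr_style() for _ in range(trials)],
}
for name, outcomes in results.items():
    mean = sum(outcomes) / trials
    share_zero = sum(o == 0 for o in outcomes) / trials
    print(f"{name}: mean deaths averted ~ {mean:.3g}, zero impact in {share_zero:.1%} of runs")
```

With these placeholder numbers, the one-shot bet has a much larger mean, but it leaves you with zero counterfactual impact in roughly 99.99% of runs, while the GiveWell-style intervention averts at least one death in the large majority of runs; that is the sense in which the latter makes you “more sure you’re making a difference”.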
Well, if AI goes well, the things on my short list to focus on next, with the incredible power unlocked by this unprecedentedly large acceleration in technological development, are: alleviating all material poverty, curing all diseases, extending human life, and (as a lower priority) ending cruel factory farming practices. This critical juncture isn’t just about preventing a harm; it’s a fork in the road that leads either to catastrophe or to huge wins on every current challenge. Of course, new challenges then arise, such as questions of offense-defense balance in technological advancement, rights of digital beings, government surveillance, etc.
Edit: for additional details on the changes I expect in the world if AI goes well, please see: https://darioamodei.com/machines-of-loving-grace
I think this level of incommensurability both contradicts folk ethics (most people I speak with agree that preventing animal torture is more important than preventing a mild human headache) and is a pretty confident view that assigns a very low weight to animals’ interests. Do you think our reasoning in moral philosophy and our understanding of animal biology are reliable enough to justify that confidence?
I think you misunderstand my point. I would also say that a minor temporary inconvenience is incommensurate with severe crippling harm or death. So I would say that no amount of animal deaths or torture could be commensurate with the life of a human. And I would say that no amount of animal headaches could be commensurate with a human headache.
Here’s a discussion I had with Claude Sonnet 3.5 on the subject:
Nathan:
On the question of animal welfare and folk ethics. I want to explore what you estimate a typical viewpoint for most humans would be on the following ethical dilemma: Suppose a room contains a human child, and a container of shrimp. You must choose to kill either the human child or all the shrimp. Which should you choose?
Claude Sonnet 3.5:
When considering this ethical dilemma from the perspective of what a typical human viewpoint might be, I would estimate that most people would choose to kill the shrimp rather than the human child. Here’s a breakdown of the reasoning that might lead to this common stance:
1. Human life valuation: In most human societies, there’s a strong cultural and emotional emphasis placed on the value of human life, especially that of children. This often stems from our biological instincts as a species, as well as societal norms and moral frameworks.
2. Cognitive capabilities: Humans generally attribute higher moral status to beings with more advanced cognitive capabilities. A human child has greater potential for complex thought, self-awareness, and future planning compared to shrimp.
3. Emotional capacity: Most people would assume that a human child has a much greater capacity for suffering and emotional experiences than shrimp do. This often factors into moral considerations.
4. Familial and social impact: The death of a human child would likely have a significant emotional impact on family members and the broader community, which many would weigh heavily in their decision.
5. Potential and future value: A human child has the potential for a long life span and could contribute to society in various ways. This future potential is often valued highly in ethical considerations.
6. Perception of shrimp: Many people view shrimp (and other invertebrates) as having limited sentience or capacity for suffering. Whether scientifically accurate or not, this perception often influences ethical decisions.
7. Numbers vs. individual value: While there may be many shrimp in the container, most people would likely value the single human life more highly than the collective lives of the shrimp.
It’s important to note that this estimated typical viewpoint doesn’t necessarily align with all philosophical or ethical frameworks. Some ethical systems, particularly those emphasizing animal welfare or the reduction of suffering for all sentient beings, might approach this dilemma differently.
Additionally, individual responses to this dilemma could vary based on cultural background, personal beliefs, or specific ethical frameworks a person adheres to. However, in most human societies, the predominant view would likely favor saving the human child.
Nathan:
What would you argue that most people would support? Which would be considered more legal under international law?
Claude:
Based on my understanding of common ethical intuitions and societal values, I would argue that most people would likely support saving the human child over 10 million shrimp, even though the decision might become more difficult or conflicting for some due to the large number involved.
The reasons for this estimated majority view include:
1. Deeply ingrained human-centric ethics in most cultures
2. The high value placed on human life, especially children’s lives
3. The perceived greater capacity for suffering and complex experiences in humans
4. The social and emotional impact of a child’s death on family and community
5. The future potential and societal contributions of a human child
However, it’s important to note that this would likely be a more contentious and debated decision than the original scenario with fewer shrimp.
Regarding international law, this specific scenario is not directly addressed in any international legal framework. However, we can make some inferences based on existing laws and principles:
1. Human rights: International human rights law, including the Universal Declaration of Human Rights and various UN conventions, places supreme importance on human life and dignity. The right to life for humans is a fundamental principle in international law.
2. Animal welfare: While there are some international agreements on animal welfare, these generally focus on preventing unnecessary cruelty rather than equating animal lives with human lives.
3. Child protection: There are specific international laws and conventions focused on protecting children’s rights and lives, such as the UN Convention on the Rights of the Child.
4. Environmental protection: While there are international laws protecting marine ecosystems, these typically focus on sustainable use and conservation rather than placing animal lives above human lives.
Given these factors, from a legal standpoint, international law would almost certainly prioritize the life of the human child over the lives of the shrimp. The legal systems of most countries, as well as international law, are fundamentally anthropocentric and place human life at the top of the moral hierarchy.
It’s worth noting that legal frameworks often lag behind evolving ethical discussions and may not always align with every individual’s moral views. However, in this case, both common moral intuitions and current legal structures would likely align in prioritizing the human child’s life.
“I do not believe that any amount of the qualitatively different animal suffering adds up to any amount of human suffering.”
I was responding to my interpretation of the sentence above. I agree that it’s a common position to assign infinitely higher importance to saving a human life than to preventing any amount of animal suffering. My understanding of the quote above was that you made an even stronger claim, since the expression is “any amount of human suffering”, which should include very small amounts of human suffering.
But I still think folk ethics on this issue is overconfident and doesn’t take moral uncertainty properly into account. I also think that kind of incommensurability claim faces other, more general theoretical problems. “Saving” a life is just another expression for extending it, since no intervention makes people immortal. That position would claim that a 0.0000000001% increase in the chance of prolonging a human life by one day is more important than preventing 1,000,000,000 animals from being born into torture.
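To spell out the structure of that objection, here is a minimal sketch in Python contrasting a lexical-priority rule (human welfare always comes first) with a weighted-aggregation rule (animal welfare counts at some discount). The numbers and the animal_weight parameter are placeholders for illustration, not anyone’s actual estimates.

```python
# Toy comparison of the two decision rules; all numbers are placeholders.

# Option A: a 0.0000000001% (i.e. 1e-12) chance of prolonging one human life by one day.
# Option B: preventing 1,000,000,000 animals from being born into torture.
OPTION_A = {"expected_human_days": 1e-12, "animal_tortures_prevented": 0}
OPTION_B = {"expected_human_days": 0.0, "animal_tortures_prevented": 1_000_000_000}

def lexical_choice(a, b):
    """Human welfare is lexically prior: animal welfare only breaks exact ties."""
    if a["expected_human_days"] != b["expected_human_days"]:
        return "A" if a["expected_human_days"] > b["expected_human_days"] else "B"
    return "A" if a["animal_tortures_prevented"] >= b["animal_tortures_prevented"] else "B"

def weighted_choice(a, b, animal_weight=1e-6):
    """Animal welfare trades off against human welfare at some finite exchange rate."""
    def score(x):
        return x["expected_human_days"] + animal_weight * x["animal_tortures_prevented"]
    return "A" if score(a) >= score(b) else "B"

print(lexical_choice(OPTION_A, OPTION_B))   # "A": the tiny human benefit always wins
print(weighted_choice(OPTION_A, OPTION_B))  # "B": with any weight above ~1e-21 here, the animals win
```

The toy comparison just shows that the lexical rule is insensitive both to how small the human-side probability gets and to how large the animal-side number gets, which is what the objection above is targeting.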