Thanks for writing it. Here are my reasons for believing the wild animal/small minds/… suffering agenda is based mostly on errors and uncertainties. Some of the uncertainties should warrant research effort, but I do not believe the current state of knowledge justifies prioritizing any kind of advocacy or value spreading.
1] The endeavour seems to be based on extrapolating intuitive models far outside the scope for which we have data. The whole suffering calculus rests on extrapolating the concept of suffering far beyond the domain where we have data from human experience.
2] A big part of it seems arbitrary. When expanding the moral circle toward small computational processes and simple systems, why not expand it toward large computational processes and complex systems? E.g. we can think of DNA-based evolution as a large computational/optimization process: suddenly “wild animal suffering” has a purpose, and traditional environment and biodiversity protection efforts make sense.
(Similarly, we could argue that much of “human utility” resides in the larger system structure above individual humans.)
3] We do not know how to measure and aggregate the utility of mind states. Like, really don’t know. E.g. it seems completely plausible to me that the utility of 10 people reaching some highly joyful mind states is the dominant contribution over all human and animal minds. (The first toy sketch after this list shows how strongly the answer depends on the aggregation function.)
4] Part of the reasoning usually seems contradictory. If human cognitive processes are in the privileged position of creating meaning in this universe … well, then they are in the privileged position, and there _is_ a categorical difference between humans and other minds. If they are not in a privileged position, why should humans impose their ideas about meaning on other agents?
5] MCE efforts directed toward AI researchers, with the intent of influencing the values of some powerful AI, may increase x-risk. E.g. if the AI is not “speciesist” and gives the same weight to satisfying the preferences of all humans and all chickens, the chickens would outnumber the humans. (The second sketch after this list works through the population arithmetic.)
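To make point 3] concrete, here is a minimal sketch of the aggregation problem. Every number in it (the population counts, the intensities, the scaling exponents) is a made-up assumption for illustration, not a claim about real minds; the point is only that the answer flips depending on how utility scales with the “intensity” of a mind state.

```python
# Toy model of point 3] (all numbers are made-up assumptions): whether a
# few peak mind states dominate the aggregate depends entirely on the
# unknown scaling from the "intensity" of a mind state to its utility.

N_ORDINARY = 1e10   # hypothetical count of ordinary human/animal minds
U_ORDINARY = 1.0    # raw intensity of an ordinary mind state (arbitrary units)
N_PEAK = 10         # the 10 highly joyful mind states from point 3]
U_PEAK = 100.0      # raw intensity of a peak state, on the same scale

def group_totals(exponent: float) -> tuple[float, float]:
    """Total utility of each group if utility scales as intensity**exponent."""
    ordinary = N_ORDINARY * U_ORDINARY ** exponent
    peak = N_PEAK * U_PEAK ** exponent
    return ordinary, peak

for exponent in (1.0, 3.0, 6.0):
    ordinary, peak = group_totals(exponent)
    print(f"exponent {exponent}: peak states hold "
          f"{peak / (ordinary + peak):.4%} of total utility")
```

With a linear scale the 10 peak states are negligible; with a sufficiently convex scale they carry essentially all of the utility. Since we do not know which scaling is right, we do not know which regime we are in.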
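Similarly, a hedged sketch of the population arithmetic behind point 5]. The population counts are rough public estimates, and the weights are illustrative assumptions rather than anyone’s proposed values:

```python
# Toy model of point 5]: aggregate preference weight of humans vs. chickens
# under different degrees of species-neutrality. Population counts are
# rough public estimates; the weights are assumptions for illustration.

HUMANS = 8e9      # roughly 8 billion humans
CHICKENS = 25e9   # roughly 25 billion chickens alive at any one time

def preference_mass(chicken_weight: float) -> tuple[float, float]:
    """Total preference weight of each group if one chicken's preferences
    count for chicken_weight times one human's."""
    return HUMANS * 1.0, CHICKENS * chicken_weight

for weight in (0.0, 0.1, 1.0):  # from fully "speciesist" to species-neutral
    human_mass, chicken_mass = preference_mass(weight)
    print(f"chicken weight {weight}: chickens hold "
          f"{chicken_mass / (human_mass + chicken_mass):.0%} of total mass")
```

At equal weights, chickens hold roughly three quarters of the total preference mass, which is the scenario the point warns about.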
You raise some good points. (The following reply doesn’t necessarily reflect Jacy’s views.)
I think the answers to a lot of these issues are somewhat arbitrary matters of moral intuition. (As you said, “A big part of it seems arbitrary.”) However, in a sense, this makes MCE more important rather than less, because it means expanded moral circles are not an inevitable result of better understanding consciousness, etc. For example, Yudkowsky’s stance on consciousness is a reasonable one that is not based on a mistaken understanding of present-day neuroscience (as far as I know), yet some feel that Yudkowsky’s view of moral patienthood isn’t wide enough for their moral tastes.
Another possible reply (that would sound better in a political speech than the previous reply) could be that MCE aims to spark discussion about these hard questions of what kinds of minds matter, without claiming to have all the answers. I personally maintain significant moral uncertainty regarding how much I care about what kinds of minds, and I’m happy to learn about other people’s moral intuitions on these things because my own intuitions aren’t settled.
> E.g. we can think of DNA-based evolution as a large computational/optimization process: suddenly “wild animal suffering” has a purpose, and traditional environment and biodiversity protection efforts make sense.
Or if we take a suffering-focused approach to these large systems, then this could provide a further argument against environmentalism. :)
> If human cognitive processes are in the privileged position of creating meaning in this universe … well, then they are in the privileged position, and there is a categorical difference between humans and other minds.
I selfishly consider my moral viewpoint to be “privileged” (in the sense that I prefer it to other people’s moral viewpoints), but the content of this viewpoint can include a desire to give substantial moral weight to non-human (and human-but-not-me) minds.