Shouldn’t you normalize for brain tissue? Instead of counting individuals, sum the total brain mass of the animals humans breed for consumption.
A “sapience estimate” would also be useful. Smaller brains are likely less capable of sapience, and it should be possible to estimate the scaling empirically.
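As a rough illustration of what such a normalization might look like: weight each species’ headcount by brain mass raised to some empirically estimated exponent. All species, counts, brain masses, and the exponent below are made-up placeholders, not real data.

```python
# Illustrative sketch only: weighting farmed-animal counts by brain mass
# and a hypothetical sapience scaling exponent. Every number here is a
# placeholder, not an actual estimate.

# (species, individuals farmed per year, approx. brain mass in grams)
FARMED = [
    ("shrimp",  440e9, 0.0001),  # placeholder figures
    ("chicken",  70e9, 4.0),
    ("pig",     1.5e9, 180.0),
]

def sapience_weight(brain_mass_g: float, exponent: float = 1.0) -> float:
    """Hypothetical scaling: moral weight proportional to brain mass
    raised to an exponent we would need to estimate empirically."""
    return brain_mass_g ** exponent

def weighted_total(animals, exponent: float = 1.0) -> float:
    """Sum of headcounts, each scaled by its species' sapience weight."""
    return sum(n * sapience_weight(m, exponent) for _, n, m in animals)

raw_count = sum(n for _, n, _ in FARMED)           # exponent = 0: pure headcount
brain_mass_total = weighted_total(FARMED, 1.0)     # exponent = 1: total brain mass
```

Note how sensitive the conclusion is to the exponent: at exponent 0 the tally is pure headcount (dominated by shrimp), while at exponent 1 it is total brain mass (dominated by the larger-brained species), which is exactly why the scaling needs to be pinned down empirically.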
How do you handle ecological displacement? Every shrimp farm occupies a chunk of biomass and ocean territory that other animals would have lived in. So you’re substituting billions of shrimp lives lived for billions of displaced other animals and insects.
So if you got an animal rights bill passed that outlawed all forms of animal protein, the fallow areas where the farms were would be repopulated by other species. Those species’ lives would be “nasty, brutish, and short”: whatever occupies the former farm will likely have a short lifespan and face ocean predators. Maybe the lack of nets to keep them confined will make their lives better.
Though given the limited cognitive capacity of shrimp, I wonder if they can even be upset by a large enough cage. Suppose a shrimp finds one edge of it; the natural ocean has barriers too. When it wanders to the other side of the cage and hits another impassable barrier, does it “know” it is trapped? I don’t know the answer, but this could be investigated empirically.
Here’s my take on this issue. I think mistaking pain sensation for moral worth is a big problem, one that is leading to significant moral costs in our community. We need to reevaluate what neural structure and function (or other compute substrate) qualifies for moral worth, and to what degree. This should be both an empirical study of the physical properties of the system and a philosophical discussion of which attributes matter.
As a starting point, I suggest we use the presence of a social contract between sapient beings capable of verbally communicating preferences. Then we should consider non-verbal communication. In the absence of any sort of abstract communication, I think we should be extremely suspicious of claims that the entity involved has sufficient computational complexity to have moral worth.