On building Omelas for shrimp: the implications of diversity-oriented theories of moral value for factory farming

Identity

In the philosophy of personal identity, the question of how to define an “individual” is complicated. If you’re not familiar with this area of philosophy, see Wait But Why’s introduction.

I think most people in EA circles subscribe to the computational theory of mind, which means that any computing device is able to instantiate a sentient being. (In the simplest case, by simply simulating a physical brain in sufficient detail.)

Computationalism does not, on its own, solve the identity problem. If two computers are running the exact same simulation of a person, is destroying one of them equivalent to killing a person, even though there’s a backup? What about merely switching one off, leaving it capable of being turned on later? These are moral questions, not factual ones, and intuitions differ.

Treating each simulation as its own separate moral patient runs into problems once the substrate is taken into account. Consider a 2-dimensional water computer that’s instantiating a person, then slice the computer in half lengthwise, separating it into two separate sets of containers for the water. Does this create a second person, despite not changing the computation or even adding any water to the system? If two digital computers running the same computation count as two different people, then two water computers must too. But then it would be unethical to slice the computer in half and pour out the water from one half, while pouring out half the water from the unsliced original would be fine, which doesn’t make a lot of sense.

Some computationalists resolve this by saying that identity is the uniqueness of computation, and multiple identical simulations are morally equivalent to just one. But how is unique computation defined exactly? If one simulation adds A+B, storing the result in the register that originally held A, and another simulation does B+A, does that implementation difference alone make them entirely different people? Seems odd.

The natural resolution to these problems is to treat uniqueness as a spectrum; killing a sentient simulation is unethical in proportion to the amount it differs from the most similar other simulation running at the time.

Common-sense morality

Interestingly, we see ideologies reminiscent of this uniqueness-of-mind approach arise elsewhere too.

In mainstream environmentalism, a hawk killing a sparrow is not seen as a bad thing; it’s just the natural order of things, and perhaps even interfering with it would be unethical. But hawks hunting all sparrows to extinction would be seen as a tragedy, and worthy of intervention.

That is, most people don’t care very much about preserving individual animals, but they do care about preserving types of animal.

I don’t think this is the same underlying philosophy as the computational one I described, since mainstream environmentalism cares more about how a species looks than about its mental activity. (A rare variant of flower that is a different color but is otherwise identical to the common variant would be worth saving under mainstream environmentalism, but not really under computationalism.) But it’s similar.

And the same sort of intuitions tend to persist when thought about more rigorously. Hedonistic utilitarians who want to tile the universe with hedonium are the exception; most people intrinsically value diversity of experience, and see a large number of very similar lives as less of a good thing.

Shrimp

The shrimp brain has around 100,000 neurons, allowing for 2^100,000 distinct brain states if we ignore synapses and treat neurons as binary. That’s a lot, but it seems unlikely that any significant fraction of those brain states are actually attainable through natural processes, and most of the ones that are reachable will be subjectively extremely similar to each other.

(Humans have about 86 billion neurons, but there obviously aren’t 2^86 billion meaningfully different experiences a human could have.)

Shrimp welfare advocacy has focused on the absolute number of shrimp that are killed every year: about 500 billion in farms, and 25 trillion in the wild. The logic is that even if shrimp carry only 0.1% of the moral worth of a bird or mammal, there are so many of them that shrimp interventions are still worth prioritizing.

But under diversity-valuing ethical theories, if we take a reasonable estimate of 10,000 meaningfully distinct shrimp minds at birth times 1 million possible external environmental inputs to those minds, that’s only 10 billion distinct shrimp lived experiences. Most of those lives are simply duplicated a massive number of times, rendering all the duplicates morally irrelevant.
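To make the duplication explicit, here’s the same back-of-envelope arithmetic in Python (every input is a guess from the text above, not an empirical figure):

```python
# Back-of-envelope estimate of distinct shrimp lived experiences.
# All inputs are guesses, not empirical measurements.
distinct_minds_at_birth = 10_000       # guessed distinct shrimp minds at birth
distinct_environments = 1_000_000      # guessed distinct environmental inputs
distinct_experiences = distinct_minds_at_birth * distinct_environments

farmed_per_year = 500e9                # ~500 billion farmed shrimp killed per year
wild_per_year = 25e12                  # ~25 trillion wild shrimp killed per year
total_shrimp = farmed_per_year + wild_per_year

# Each distinct experience is instantiated thousands of times over.
duplication_factor = total_shrimp / distinct_experiences
print(f"{distinct_experiences:,} distinct experiences")    # 10,000,000,000
print(f"~{duplication_factor:,.0f}x average duplication")  # ~2,550x
```

Under diversity counting, only one instance of each experience carries moral weight, so the effective population is roughly 2,550 times smaller than the headcount suggests.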

Practical consequences

This has significant impact on the effectiveness of welfare interventions. The existence of a finite number of distinct shrimp lives imposes a ceiling on the total moral value of the species, and means that simply multiplying by the number of physical shrimp bodies is invalid.

In particular, an intervention that improves quality of life in 10% of shrimp farms is not worth 10% as much as the same intervention applied to all farms; it’s worth about 0, since ~all the negative utility shrimp lives that are averted in the affected farms are still instantiated in other farms.
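A toy model makes this ceiling effect concrete. Assume, purely for illustration, that every farm instantiates the same pool of distinct bad lives; under diversity counting, an intervention only scores for lives it removes from existence everywhere:

```python
# Toy model: 10 hypothetical farms, each instantiating the same 1,000
# distinct bad lives (an assumption for illustration, not real data).
farm_lives = {f"farm_{i}": set(range(1_000)) for i in range(10)}

def lives_averted(improved_farms):
    """Distinct bad lives no longer instantiated anywhere once the
    improved farms stop producing them (diversity counting)."""
    still_running = set().union(*(lives for name, lives in farm_lives.items()
                                  if name not in improved_farms))
    all_lives = set().union(*farm_lives.values())
    return len(all_lives - still_running)

print(lives_averted({"farm_0"}))       # 0: every life still runs elsewhere
print(lives_averted(set(farm_lives)))  # 1000: only a global fix averts lives
```

Improving one farm, or even nine of the ten, averts zero distinct lives in this model; the entire moral payoff arrives only when the last farm instantiating a given life is reached.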

It would therefore be better to pursue interventions that affect all farms worldwide (or all farms within a subset that use particularly cruel methods), even if the magnitude of the improvement is much smaller than could be achieved by focusing on a specific farm. Global improvements may be able to actually eliminate every single shrimp body instantiating a particularly unpleasant life, whereas local interventions cannot.

This also implies that wild shrimp welfare improvements are proportionately more impactful than those that focus on farmed shrimp. Farmed shrimp live extremely similar lives; it wouldn’t surprise me if only a few million distinct experiences are possible in a farm, meaning that less than 0.01% of farmed shrimp are morally relevant. Wild shrimp live in a much more diverse environment, and probably have a larger percentage of individuals living distinct lives.

The diversity theory of moral value also opens up an entirely new avenue of welfare intervention: standardization. If shrimp farms can be made more homogeneous, the number of distinct lives experienced by shrimp in those farms will decrease. If the number of distinct lives being lived decreases sufficiently, this could be a net moral positive even if the average life becomes worse.
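The trade-off can be sketched with invented numbers: if total disvalue is (distinct lives) × (average badness), a large enough reduction in distinctness outweighs a worse average:

```python
# Invented numbers for illustration only.
# Diversity counting: total disvalue = distinct lives x average badness.
before_distinct, before_badness = 10_000_000, 1.0
after_distinct, after_badness = 1_000_000, 1.5   # standardization: 10x fewer
                                                 # distinct lives, each 50% worse

before_total = before_distinct * before_badness  # 10,000,000
after_total = after_distinct * after_badness     # 1,500,000
print(after_total < before_total)                # True: net moral improvement
```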

In the ideal case, trillions of shrimp could be farmed in atomically-identical environments; enough to feed the world while yielding only the negative moral impact of torturing a single shrimp.

Further research

There are two main lines of investigation needed in order to be confident in these prescriptions, one philosophical and one empirical.

Firstly, we must work out whether computational theories of mind and diversity theories of identity are the value systems we actually want to follow. Torturing two identical human bodies does intuitively seem worse than torturing just one, so perhaps this is not the path humanity wants to go down. It will be difficult to square this intuition with the slicing problem, but perhaps it is doable. The Stanford Encyclopedia of Philosophy also contains some other objections to computational theories of mind.

I also glossed over some relevant details of the theory, in particular temporal considerations. If simulating two bad lives at the same time is not worse than simulating one, then presumably simulating them in sequence is not worse than simulating them simultaneously. This would mean that there is no value in preventing bad experiences that have already been had in the past. Since trillions of shrimp have already been tortured and killed, further repetitions of identical lives are irrelevant, and our focus should be on preventing new types of bad lives from being brought into existence. In practice this probably translates into trying to prevent innovation in the shrimp farming sector, keeping everyone using the same technologies they’ve used in the past. But again, the idea that torturing a person for millions of years is perfectly ethical as long as they had already experienced the same torture in the past would perhaps raise a few eyebrows.

Secondly, we need to pin down the actual number of different mental experiences involved. My estimates above were complete guesses. If the actual number of distinct shrimp lives is just a few orders of magnitude higher, then the discussed ceiling effects become irrelevant, and standard interventions are still most effective. And if the true number is much lower, then we need to look into whether these ceiling effects may apply to chickens and other factory farmed animals as well.

Experiments to this effect could potentially be performed with existing technology. Behavior is a reasonable proxy for mental experience, since the evolutionary purpose of mental experience is to produce behavior, so measuring the number of distinct shrimp behaviors in response to identical stimuli should allow us to estimate the number of different brain states among those shrimp without needing to intimately understand what those brain states are.
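One concrete way to do this (my suggestion, not something established in the shrimp welfare literature) is to borrow a species-richness estimator from ecology, such as Chao1, which extrapolates the total number of distinct types from how many types appear exactly once or twice in a sample:

```python
from collections import Counter

def chao1(observed_behaviors):
    """Chao1 estimate of the total number of distinct types, given a
    sample of observed behavior labels (strings, codes, etc.)."""
    counts = Counter(observed_behaviors)
    s_obs = len(counts)                             # types actually seen
    f1 = sum(1 for c in counts.values() if c == 1)  # seen exactly once
    f2 = sum(1 for c in counts.values() if c == 2)  # seen exactly twice
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2            # bias-corrected form
    return s_obs + f1 ** 2 / (2 * f2)

# Hypothetical sample: 5 behavior types seen, 2 singletons, 1 doubleton.
sample = ["a", "a", "a", "b", "b", "c", "c", "c", "d", "e"]
print(chao1(sample))  # 7.0: estimates ~2 behavior types not yet observed
```

Applied to real behavioral recordings, the same estimate would give a lower bound on the number of distinct brain states, under the behavior-as-proxy assumption above.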

Chaos theory makes this challenging, as, for example, the different eddies of water that form as the shrimp swims could impact its behavior. But with a rigorous enough protocol and large enough sample size, it seems feasible to get meaningful results from something like this.