Against Suffering-Minimization as Terminal Goal: A Case for Value-Depth Weighting

Summary

I argue that negative utilitarianism and neuron-counting approaches to moral weight contain a critical flaw: they optimize for a metric (absence of suffering) that, taken to its logical conclusion, is best satisfied by eliminating the conditions that make anything valuable at all. I propose an alternative framing in which suffering functions as a signal that value is at stake, and the capacity to suffer is inseparable from the capacity for the things we actually care about. This has significant implications for cause prioritization.

The Argument in Brief

  1. Negative utilitarianism, consistently applied, recommends eliminating all beings capable of suffering, or at minimum reducing consciousness until nothing registers as loss.

  2. This conclusion strikes most people as monstrous, which suggests our actual values aren’t captured by suffering-minimization.

  3. What we actually seem to value involves complexity, continuity, self-reflection, narrative, and what I’ll gesture at as “depth of experience.”

  4. The capacity to suffer and the capacity for depth are not independent properties. They’re the same underlying capacity. A being that cannot lose anything cannot genuinely have anything.

  5. Therefore, we should optimize not for minimizing suffering but for realizing value, which includes navigating away from suffering but never at the cost of flattening the value landscape itself.

Why This Matters for Cause Prioritization

The EA community has increasingly focused on interventions targeting large numbers of simple creatures. Shrimp welfare is a prominent example. The reasoning: these animals likely have some capacity for suffering, they exist in enormous numbers, and interventions may be cost-effective in terms of “suffering reduced per dollar.”

This reasoning depends on an implicit assumption: that a nociceptive signal in a shrimp nervous system is the same kind of thing as suffering in a human being, differing perhaps in intensity but not in kind. Aggregate enough shrimp-pain and it outweighs human concerns.
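
To make that implicit model explicit, here is a minimal sketch of the aggregation arithmetic. The function and every number are hypothetical, chosen only to show how linear aggregation lets sheer quantity substitute for depth of experience.

    # Toy sketch of the implicit aggregation model described above.
    # All names and numbers are hypothetical, chosen only to show the arithmetic.

    def suffering_reduced_per_dollar(individuals_helped, intensity_weight, cost):
        """Linear aggregation: weighted suffering averted divided by cost."""
        return individuals_helped * intensity_weight / cost

    # A hypothetical intervention helping enormous numbers of simple creatures
    # at a tiny per-individual weight, versus one helping far fewer complex beings.
    shrimp_program = suffering_reduced_per_dollar(10_000_000, 0.0001, 100_000)
    human_program = suffering_reduced_per_dollar(1_000, 1.0, 100_000)

    print(shrimp_program, human_program)  # 0.01 0.01

    # Under linear aggregation the two tie, and with larger shrimp numbers the
    # shrimp program wins: quantity alone carries the comparison.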

I want to suggest this is a category error.

The Disanalogy Between Nociception and Suffering

Consider what happens when you stub your toe:

  • There’s a nociceptive signal (neurons firing, communicating tissue damage)

  • There’s a phenomenal experience (it hurts)

  • There’s a self-reflective recognition (“I am in pain”)

  • There’s a narrative context (“this is interrupting my day,” “I was careless”)

  • There’s temporal extension (“this will fade,” “I’ve felt worse”)

A shrimp, when exposed to noxious stimuli, plausibly has something like the first item. Whether it has the second is contested. It almost certainly lacks the third through fifth.

Why should we think these additional layers are morally relevant? Because they’re what make suffering matter to the sufferer. Pain that exists in a narrative, that is recognized as pain, that threatens projects and relationships and self-continuity: this is a different phenomenon from a damage signal in a simple reflex arc.

The negative utilitarian might respond: “But the raw feel is what’s bad, not the cognitive elaboration.”

I’d counter: What evidence do you have that there is a “raw feel” independent of cognitive complexity? The more we learn about consciousness, the more it appears that phenomenal experience is bound up with integration, self-modeling, and complexity. There may be no “pure suffering” uncontaminated by the cognitive apparatus that knows it’s suffering.

The Reframe: Suffering as Signal, Not Terminal Bad

Here’s an alternative model:

Suffering is what it feels like from the inside when something valuable is at risk or being lost. Rather than an independent bad floating free in the universe, suffering is the shadow cast by value.

This explains our intuitions better than negative utilitarianism:

  • Why is the death of a person with rich relationships and future plans worse than the death of someone in a permanent vegetative state? Not because of pain differential, but because more value is at stake.

  • Why is torturing someone for years worse than a brief intense pain? Not just because of summed hedons, but because of what it does to a self extended through time.

  • Why do we think wireheading (direct stimulation of pleasure centers) isn’t the solution to all problems? Because we don’t value felt pleasure as a terminal good; we value it as a signal that things are going well.

Implications

If this framing is right:

On cause prioritization: We should weight interventions not by neuron count or presumed suffering intensity alone, but by the depth of value at stake; a toy sketch of this contrast follows the list below. This likely redirects attention toward:

  • Existential risk (because extinction forecloses all future value, not primarily because it causes suffering)

  • Enabling conditions for continued growth in complexity and depth

  • Avoiding lock-in to suboptimal states
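
To make the contrast with neuron-count weighting concrete, here is a minimal toy sketch building on the earlier one. The depth scores, numbers, and weighting function are hypothetical stand-ins for illustration, not a calibrated model.

    # Toy sketch of depth-weighted prioritization. The depth scores, numbers,
    # and weighting function are hypothetical illustrations, not a calibrated model.

    def value_at_stake(individuals, suffering_intensity, depth):
        """Weight each individual's stake by depth of experience, not intensity alone.

        `depth` stands in for self-modeling, narrative context, and temporal
        extension; it multiplies the stake instead of being ignored.
        """
        return individuals * suffering_intensity * depth

    # The same hypothetical interventions as before, now with a depth term.
    shrimp_stake = value_at_stake(10_000_000, 0.0001, 0.001)
    human_stake = value_at_stake(1_000, 1.0, 1.0)

    print(shrimp_stake, human_stake)  # 1.0 1000.0

The particular numbers don’t matter; the structure does. Once depth enters the product, rankings can flip even when counts and intensities are held fixed, because what gets aggregated is the depth of value at stake rather than raw nociceptive capacity.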

On animal welfare: Animal suffering still matters. But it matters in proportion to the depth of the system doing the suffering. A chimpanzee with social bonds, memory, and future-orientation has more at stake than a shrimp with a nociceptive reflex. This isn’t speciesism. It’s a recognition that morally relevant properties come in degrees.

On the long-term future: The highest-value outcomes aren’t those with the least suffering but those with the greatest realized depth. This might include forms of being far beyond current human experience: minds that can hold more, integrate more, experience more. The project is generative, not preventative.

Objections and Responses

“This is just motivated reasoning to avoid caring about animals.”

I’m not arguing animals don’t matter. I’m arguing that the degree to which they matter scales with the complexity of their experience, not with neuron count or nociceptive capacity alone. I’m happy to accept significant obligations toward animals with rich inner lives. I’m skeptical that a shrimp welfare intervention outweighs existential risk reduction.

“You’re privileging properties humans happen to have.”

I’m privileging properties that seem to be what make experience deep, and yes, humans have more of these than shrimp. But the argument doesn’t depend on human specialness. A future AI with vastly more integrative capacity than humans would matter more than humans on this view. The metric is something like depth, complexity, and capacity for value, not human-likeness.

“How do you quantify depth?”

I can’t, precisely. But inability to quantify isn’t a reason to default to a metric (neuron-count) that we can quantify but that doesn’t track what we actually care about. Rigor in measuring the wrong thing produces worse decisions than appropriate uncertainty about the right thing.

“This could justify ignoring huge amounts of suffering.”

It could justify deprioritizing interventions that reduce simple nociceptive signals in favor of interventions that protect or enable depth. I think this is correct. I don’t think it justifies cruelty. Wanton harm to simple creatures is still wrong, partly because of what it does to the agent and partly because we’re uncertain about their experience. But it does suggest that the shrimp-welfare dollar might be better spent on AI safety and, importantly, on AI welfare research such as that being done by https://eleosai.org/.

Conclusion

The EA community has been admirably willing to follow arguments to uncomfortable places. I’m suggesting that consistent application of our actual values leads somewhere other than where the neuron-counters have been going. Those values look less like hedonic utilitarianism and more like what I’d call “depth realization.”

Suffering matters because value matters. Destroying the capacity for suffering by destroying the capacity for value isn’t victory. It’s the ultimate defeat.

The question before us is how to build toward the genuinely good.