Hey, kudos to you for writing a longform about this. I have talked to some self-identified negative utilitarians, and I think this is a discussion worth having.
I think this post is mixing two different claims.
Critiquing “minimize suffering as the only terminal value → extinction is optimal” makes sense.
But that doesn’t automatically imply that some suffering-reduction interventions (like shrimp stunning) are not worth it.
You can reject suffering-minimization-as-everything and still think that large amounts of probable suffering in simple systems matter at the margin.
Also, I appreciated the discussion of depth, though I have nothing to say about it here.
I would appreciate:
- Any negative utilitarian or person knowledgeable about negative utilitarianism commenting on why NU doesn’t necessarily recommend extinction.
- The OP clarifying the post by making more explicit the claims.
See also this companion piece: https://forum.effectivealtruism.org/posts/5Nv3xK9myFzN9aqfE/the-ceiling-is-nowhere-near
Yes, you can reject NU while still thinking shrimp welfare matters at the margin. The question is how much it matters relative to alternatives. My argument is that standard EA reasoning on this often smuggles in assumptions about moral weight (neuron count, nociceptive capacity) that don’t track what we actually care about.
If you accept the depth-weighting framework in sections 3-5, then even a pluralist who includes suffering-reduction as one value among many should weight interventions differently than the neuron-counters suggest. The shrimp intervention might still have positive value—I’m not arguing it’s worthless—but the cost-effectiveness comparison to, say, x-risk work shifts significantly.
So the steel-manned version of my claim: “Given limited resources, the depth-weighting framework implies shrimp welfare is probably not among the highest-impact interventions, even granting uncertainty about shrimp experience.” That’s weaker than “shrimp don’t matter” and doesn’t depend on NU being false.
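To make the shape of that comparison concrete, here is a toy expected-value sketch. Everything in it is my own assumption for illustration: the weights, probability, scale, and cost are invented, and the "depth discount" is just a stand-in scalar for the framework in sections 3-5, not a number from the post.

```python
# Toy Fermi sketch (all numbers are mine, purely illustrative): how the
# per-dollar expected value of a suffering-reduction intervention moves
# when moral weight comes from neuron count vs. a steep "depth" discount.

def ev_per_dollar(p_sentience, moral_weight, individuals_helped, cost_usd):
    """Expected welfare gain per dollar: P(sentience) * weight * scale / cost."""
    return p_sentience * moral_weight * individuals_helped / cost_usd

# Hypothetical shrimp-stunning campaign (assumed figures, not from the post).
NEURON_WEIGHT = 1e-4  # shrimp weight relative to a human, via neuron count
DEPTH_WEIGHT = 1e-7   # the same weight under an assumed depth discount

for label, w in [("neuron weighting", NEURON_WEIGHT),
                 ("depth weighting", DEPTH_WEIGHT)]:
    ev = ev_per_dollar(p_sentience=0.3, moral_weight=w,
                       individuals_helped=1e9, cost_usd=1e6)
    print(f"{label}: {ev:.2e} welfare units per dollar")

# Both values stay positive; only the intervention's ranking against
# alternatives (e.g., x-risk work) changes, which is the claim at issue.
```

The point is structural, not numerical: under both weightings the intervention has positive value, but the depth discount can move it out of the top tier of a cost-effectiveness ranking.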
I would be very surprised if [neuron count + nociceptive capacity as moral weight] were standard EA assumptions. I haven’t seen this among the people I know, nor among the major funders, who seem more pluralistic to me.
My main critique of this post is that it makes several different claims, and it’s not clear which arguments support which conclusions. I think your message would be clearer after a bit of rewriting, and then it would be easier to have an object-level discussion.
Nice comment! I answered one of your requests as a top-level comment.