Update (April 2026)
Since writing this post, our thinking on this work has evolved in a few important ways.
In particular, what we originally referred to as the Pain Atlas Project is now better understood as part of a broader effort: the development of a Welfare Footprint Atlas. The goal is to enable the generation of structured, comparable, and inspectable estimates of animal welfare across products, production systems, and species.
The key shift is not simply the use of AI, but the ability to produce first-pass estimates at scale, making it easier to identify where suffering is concentrated and where interventions may have the greatest impact — while keeping assumptions explicit and open to revision.
We’ve updated the main text of the post to reflect this evolution, and we’re continuing to develop the underlying tools and methods that make this possible.
Thanks, Jim! I think Dolores/Mildred is actually a very helpful toy example.
If both Dolores and Mildred already cross the threshold needed to lexically prioritize starvation, then Dolores’s stronger feeling may indeed add no extra functional value. That’s very close to the point we tried to make in the revised Figure 1 of Do primitive sentient organisms feel extreme pain?: two systems can differ in range while still being equally effective for prioritization, provided the relevant discriminative structure is the same.
So yes: in your setup, the extra intensity could be selectively neutral if it also carries no extra costs or constraints. That’s where the framework still bites. Our point isn’t that higher intensity must always be selected against, but that it shouldn’t automatically be treated as either necessary or neutral.
On your second point, I also agree that we should be careful: I would not want to claim, as a general rule, that “more intense” always means “more expensive.” The more modest claim is that some ways of implementing very high intensity may require broader integration, stronger modulation, or greater whole-system involvement. If they do not, then your neutrality story remains a live possibility (even if it is not the default expectation in the framework).
This is also part of why we’ve been trying to make these distinctions more explicit in the more recent Interspecific Affect GPT post. It doesn’t settle cases like this, but it forces us to state where the ceiling question depends on extra commitments, where neutrality remains a serious live possibility, and how this kind of reasoning can be pushed toward real taxa — something that matters directly for interspecific comparisons of the capacity to feel pain.