Thanks, Vasco — I see where the confusion is coming from.
The difficulty is that in our framework, resolution is not defined as a mathematical density of bins over a fixed, external intensity axis (e.g., dN/dI). That framing assumes there is already a continuous welfare scale “out there,” and resolution simply tells us how finely the organism partitions it.
In our usage, resolution refers to the organism’s internal discriminative granularity — how finely differences in affective magnitude can be distinguished and behaviorally prioritized within whatever range the organism has.
So resolution is not the derivative of category-count with respect to an external intensity variable. Rather, it is a property of the encoding architecture itself.
That’s why it is orthogonal to range. A system may:
• Have a narrow range but very fine discriminative structure within that range.
• Have a wide range but coarse internal discrimination.
• Increase both independently.
Your car analogy is actually helpful: a car can move very fast over a short distance, because speed is not determined by the total distance covered. Likewise, high resolution does not require a wide affective range, and vice versa.
So resolution is better understood as internal discriminative power, not as bin density over a pre-specified global welfare scale.
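To make the orthogonality concrete, here is a minimal numerical sketch (the numbers and the `distinguishable_levels` helper are ours, purely for illustration, not part of the framework): range and step size can vary independently, so a narrow-range system can have more internally distinguishable levels than a wide-range one.

```python
# Hypothetical illustration: range and resolution vary independently.
# "Range" = the span of magnitudes a system can represent;
# "resolution" = the smallest difference it can discriminate (step size).

def distinguishable_levels(lo, hi, step):
    """Count of internally distinguishable levels for a given range and step."""
    return round((hi - lo) / step) + 1

# Narrow range, fine discrimination:
narrow_fine = distinguishable_levels(0.0, 1.0, 0.01)    # 101 levels
# Wide range, coarse discrimination:
wide_coarse = distinguishable_levels(0.0, 100.0, 10.0)  # 11 levels

print(narrow_fine, wide_coarse)  # prints: 101 11
```

Note that neither quantity determines the other: you can widen the range or shrink the step without touching the other parameter, which is exactly the independence claimed above.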
In our upcoming post, we introduce human-anchored reference categories (Annoying(h), Hurtful(h), Disabling(h), Excruciating(h)) to provide a pragmatic shared coordinate system for cross-species discussion. So if one wants to talk about “acuity/resolution between A and B,” it’s reasonable to treat A and B as positions (or intervals) on that human-anchored scale.
But no: we’re not defining acuity as #levels/(B−A), because that formula requires meaningful distances between A and B. At this stage the (h) scale is best treated as ordinal: it supports “higher/lower ceiling” comparisons, not subtraction or ratios.
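One way to see what “ordinal only” rules out, as a sketch (the category names are from the post; the `OrdinalCategory` class is our own illustration, not the framework’s machinery): positions on the (h) scale can be compared, but subtracting them is simply undefined.

```python
from functools import total_ordering

# Hypothetical sketch of an ordinal-only scale: comparisons are defined,
# but arithmetic (subtraction, ratios) deliberately is not.
_ORDER = ["Annoying(h)", "Hurtful(h)", "Disabling(h)", "Excruciating(h)"]

@total_ordering
class OrdinalCategory:
    def __init__(self, name):
        self.rank = _ORDER.index(name)  # raises ValueError if not on the scale
        self.name = name

    def __eq__(self, other):
        return self.rank == other.rank

    def __lt__(self, other):
        return self.rank < other.rank
    # No __sub__ or __truediv__: "Disabling(h) - Annoying(h)" is a TypeError,
    # because ordinal positions carry order, not distance.

a = OrdinalCategory("Annoying(h)")
d = OrdinalCategory("Disabling(h)")
print(a < d)   # True: "higher/lower ceiling" comparisons are supported
# d - a        # would raise TypeError: distances are undefined on this scale
```

If the scale later acquires defensible interval structure, one could add arithmetic; the point is that nothing in the current (h) anchors licenses it.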