Thanks, Jim — that does get to the crux.
I think your scenario is plausible in principle: once an alarm is “loud enough,” further increases in intensity could be selectively neutral, so unnecessarily loud alarms might persist by drift, much like neutral variants in molecular evolution.
My hesitation is about how often extreme felt intensity actually falls into that neutral regime. For neutrality to hold, extra intensity must add no benefit and impose no additional costs or constraints. If affective states are whole-organism control states rather than simple sensory readouts, then escalating intensity plausibly requires extra integration, valuation, or modulatory capacity. In that case, intensity beyond “loud enough” would not be strictly neutral, and drift would be limited.
So I see neutral drift as a live alternative, but not the default. The framework is meant to clarify when neutrality is plausible versus when selection should instead cap, reshape, or avoid extreme intensity altogether.
Oh yes, I agree with all that. Just to make sure I understand what you think of my original point:
To be clear, extra integration, valuation, and modulatory capacity are costly only if they decrease fitness in some way, right? An unnecessarily louder alarm for some problem hurts only if it impedes your capacity to solve other important problems.
My original suggestion was that, while an unnecessarily loud alarm would generally be maladaptive because of the above (as your post suggests),[1] it might not be in genuinely catastrophic situations specifically, because the importance of solving the problem signaled by the alarm overwhelmingly dominates. It matters little whether it impedes your capacity to solve other problems.
I get the impression that you agree with this as presently stated (at least depending on one’s interpretation of “overwhelmingly” and “little”) and were simply making sure I wasn’t taking away too much from my point. Is that correct? Or do you in fact see reasons to disagree with the above?[2]
Yes, absolutely. I did not mean to question any of that. I’m just curious about this potential specific deviation from the default in high-stakes situations.[3]
So, yes, I’m not claiming that “extreme felt intensity actually [often] falls into that neutral regime”, in that sense.
One counter-argument I see is that, while the unnecessarily loud alarm doesn’t hurt in genuinely catastrophic situations, there are always risks of false alarms. If the loud alarm misfires, there is no overwhelmingly important problem that largely dominates the cost of reducing your capacity to solve others; the misfire is purely bad for fitness. I don’t know how significant this misfiring risk is, though, and hence how much weight to put on this counter-argument.
I’m curious because it seems highly relevant to, e.g., the question of whether organisms with narrower welfare ranges (EDIT: lower resolution) could feel extreme pain, and how reliably we can estimate the probability of this, which in turn matters for how precise our moral weight estimates can be (see #3 in this informal research agenda).

Thanks Jim
On the cost point you raised — “extra integration, valuation, and modulatory capacity are costly only if they decrease fitness in some way, right?” — selection indeed acts on net fitness. Still, it’s both useful and standard to keep costs and benefits analytically separate before recombining them. A trait can be costly in terms of resources or architecture even when it increases fitness overall; brains and immune systems are classic examples.
On your footnote #3 — “the question of whether organisms with narrower welfare ranges could feel extreme pain” — I think there may be a bit of a contradiction in terms. If an organism has a genuinely narrower welfare range, then by definition (or at least under the operational definitions I’m using), it does not reach disabling or excruciating levels of negative affect. In that framing, the relevant question is precisely where the negative-intensity ceiling lies.
On the cost point — Right, the words I chose made it very unclear whether and when I was talking about only costs, or only benefits, or overall fitness once we combine both, sorry.
On my contradiction — Oops yeah, I meant organisms with lower resolution. My bad.
Thanks for taking the time to reply to all this. Very helpful!