Thanks a lot for the kind words, Jim — and for the thoughtful pushback.
I think your point holds if we assume that the only way to implement a very strong alarm is via extreme felt intensity — but that assumption is exactly what we’re questioning.
I agree that in genuinely catastrophic situations, evolution should tolerate very “loud” alarms. The open question, though, is whether those alarms need to be implemented as extreme affective states, rather than through non-affective or lower-intensity control mechanisms.
On the benefit side, there seem to be two distinct roles a very strong signal could play. First, triggering an immediate reaction in life-or-death situations. But this doesn’t require affect at all: many organisms (including very simple ones) already show robust threat responses via non-felt control. Even in sentient organisms, immediate escape could in principle be driven by low-intensity affect if thresholds are set low enough, especially where behavioral options are limited.
Second, overriding other ongoing motivations in organisms with richer behavioral repertoires. Here, stronger affective signals become more plausibly useful, as they can reliably dominate competing drives (foraging, mating, self-maintenance, etc.). One way to achieve this is by expanding affective range rather than relying only on finer discrimination within a narrow range.
On the cost side, generating and sustaining very high-intensity affective states may plausibly require substantial architectural capacity of the kind we discuss above. In systems with limited computational or neural resources—and especially in organisms with few available behavioral options—extreme felt states may therefore be difficult or unnecessary to implement, regardless of how valuable a very loud alarm would be.
> I agree that in genuinely catastrophic situations, evolution should tolerate very “loud” alarms. The open question, though, is whether those alarms need to be implemented as extreme affective states, rather than through non-affective or lower-intensity control mechanisms.

I was assuming they do not need to be, but might appear and remain anyway if they have no significant downside, like humans’ protruding chins, for example. How loud the alarm is beyond the “loud enough” point would then just be a matter of luck.[1] Both just-loud-enough alarms and unnecessarily loud ones would be about equally effective, and so about equally likely, all else equal. How plausible do you think this is?
Sorry for not being clear in my first comment, and thanks for helping pin down the crux!
[1] Apparently, something similar and well-known may be happening in molecular evolution.
Thanks, Jim — that does get to the crux.
I think your scenario is plausible in principle: once an alarm is “loud enough,” further increases in intensity could be selectively neutral, so unnecessarily loud alarms might persist by drift, much like neutral variants in molecular evolution.
My hesitation is about how often extreme felt intensity actually falls into that neutral regime. For neutrality to hold, extra intensity must add no benefit and impose no additional costs or constraints. If affective states are whole-organism control states rather than simple sensory readouts, then escalating intensity plausibly requires extra integration, valuation, or modulatory capacity. In that case, intensity beyond “loud enough” would not be strictly neutral, and drift would be limited.
So I see neutral drift as a live alternative, but not the default. The framework is meant to clarify when neutrality is plausible versus when selection should instead cap, reshape, or avoid extreme intensity altogether.
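The drift-versus-cost contrast above can be made concrete with a toy simulation. This is purely a hypothetical sketch: the fitness function, thresholds, and mutation parameters are all invented for illustration, not taken from the discussion. Alarm intensity mutates freely; alarms below the “loud enough” threshold are heavily penalized; above it, fitness is either flat (the neutral regime, where drift operates) or declines with excess intensity (the costly regime, where selection pulls intensity back toward the threshold).

```python
import random

def fitness(intensity, threshold=1.0, cost_per_unit=0.0):
    """Toy fitness of an alarm with a given felt intensity.

    Below `threshold` the alarm fails to trigger escape, which is
    heavily penalized. Above it, fitness is flat (the neutral regime)
    minus an optional architectural cost that grows with intensity
    beyond the threshold. All numbers are invented for illustration.
    """
    if intensity < threshold:
        return 0.01  # failing alarm: strongly selected against
    return max(1.0 - cost_per_unit * (intensity - threshold), 1e-9)

def evolve(pop_size=150, generations=1000, threshold=1.0,
           cost_per_unit=0.0, mutation_sd=0.03, seed=0):
    """Minimal Wright-Fisher-style simulation of alarm intensity."""
    rng = random.Random(seed)
    pop = [threshold + 0.1] * pop_size  # start just past "loud enough"
    for _ in range(generations):
        # fitness-proportional reproduction
        weights = [fitness(x, threshold, cost_per_unit) for x in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # small random mutations in intensity
        pop = [max(0.0, x + rng.gauss(0.0, mutation_sd)) for x in parents]
    return sum(pop) / pop_size

# With no cost, intensity beyond "loud enough" is free to drift.
neutral_mean = evolve(cost_per_unit=0.0)
# With a cost on excess intensity, selection pulls it back toward the threshold.
costly_mean = evolve(cost_per_unit=2.0)
```

The point of the sketch is only the qualitative contrast: in the neutral regime the selection gradient above the threshold is exactly zero, so excess intensity wanders by drift, whereas any nonzero cost creates a gradient that caps it near “loud enough” (subject to the usual caveat that selection only beats drift when the cost exceeds roughly the reciprocal of the effective population size).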
Oh yes, I agree with all that. Just to make sure I understand what you think of my original point:
> If affective states are whole-organism control states rather than simple sensory readouts, then escalating intensity plausibly requires extra integration, valuation, or modulatory capacity. In that case, intensity beyond “loud enough” would not be strictly neutral, and drift would be limited.

To be clear, extra integration, valuation, and modulatory capacity are costly only if they decrease fitness in some way, right? An unnecessarily louder alarm for some problem hurts only if it impedes your capacity to solve other important problems.
My original suggestion was that, while an unnecessarily loud alarm would generally be maladaptive because of the above (as your post suggests),[1] it might not be in genuinely catastrophic situations specifically, because the importance of solving the problem signaled by the alarm overwhelmingly dominates: it matters little whether it impedes your capacity to solve other problems.
I get the impression that you agree with this as presently stated (at least depending on one’s interpretation of “overwhelmingly” and “little”) and were simply making sure I wasn’t taking away too much from my point. Is that correct? Or do you in fact see reasons to disagree with the above?[2]
> So I see neutral drift as a live alternative, but not the default. The framework is meant to clarify when neutrality is plausible versus when selection should instead cap, reshape, or avoid extreme intensity altogether.

Yes, absolutely. I did not mean to question any of that. I’m just curious about this potential specific deviation from the default in high-stakes situations.[3] So, yes, I’m not claiming that “extreme felt intensity actually [often] falls into that neutral regime” in that sense.
One counter-argument I see is that, while the unnecessarily loud alarm doesn’t hurt in genuinely catastrophic situations, there is always some risk of false alarms. If the loud alarm misfires, there is no overwhelmingly important problem whose importance dominates the cost of reducing your capacity to solve other problems. This is just purely bad for fitness. I don’t know how significant this misfiring risk is, though, and hence how much weight to put on this counter-argument.
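The false-alarm counter-argument amounts to a simple expected-value calculation, sketched below with invented numbers (the function name, rates, and costs are hypothetical, chosen only to make the structure explicit): in a genuine catastrophe extra loudness is assumed selectively neutral because survival dominates, while on a false alarm it only pays the disruption cost.

```python
def expected_cost_of_extra_loudness(p_false_alarm,
                                    disruption_cost_per_unit,
                                    extra_loudness):
    """Expected fitness cost of alarm loudness beyond 'loud enough'.

    Toy assumptions (invented for illustration):
    - In a genuine catastrophe, extra loudness contributes ~0:
      the benefit of solving the problem dominates everything else.
    - On a false alarm, extra loudness only disrupts other activities,
      at `disruption_cost_per_unit` per unit of excess intensity.
    """
    cost_in_catastrophe = 0.0  # neutral: survival benefit dominates
    cost_on_false_alarm = disruption_cost_per_unit * extra_loudness
    return ((1 - p_false_alarm) * cost_in_catastrophe
            + p_false_alarm * cost_on_false_alarm)

# Any nonzero false-alarm rate gives excess loudness a strictly positive
# expected cost, so it is no longer strictly neutral.
cost = expected_cost_of_extra_loudness(0.2, 0.05, 1.0)
```

On this framing, the weight of the counter-argument reduces to two empirical unknowns: how often the alarm misfires, and how large the per-misfire disruption is relative to the drift-neutrality band.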
[3] I’m curious because it seems highly relevant to, e.g., the question of whether organisms with narrower welfare ranges (EDIT: lower resolution) could feel extreme pain, and how reliably we can estimate the probability of this, which in turn matters for how precise our moral weight estimates can be (see #3 in this informal research agenda).

Thanks, Jim.
On the cost point you raised — “extra integration, valuation, and modulatory capacity are costly only if they decrease fitness in some way, right?” — selection indeed acts on net fitness. Still, it’s both useful and standard to keep costs and benefits analytically separate before recombining them. A trait can be costly in terms of resources or architecture even when it increases fitness overall; brains and immune systems are classic examples.
On your footnote #3 — “the question of whether organisms with narrower welfare ranges could feel extreme pain” — I think there may be a bit of a contradiction in terms. If an organism has a genuinely narrower welfare range, then by definition (or at least under the operational definitions I’m using), it does not reach disabling or excruciating levels of negative affect. In that framing, the relevant question is precisely where the negative-intensity ceiling lies.
On the cost point — Right, the words I chose made it unclear when I was talking about costs only, benefits only, or overall fitness once both are combined. Sorry!
On my contradiction — Oops yeah, I meant organisms with lower resolution. My bad.
Thanks for taking the time to reply to all this. Very helpful!