I recently came across this video clip where Michael Pollan argues that artificial systems cannot be conscious. His argument touches on several themes relevant to this post—specifically the conditions for the origin of sentience—but I believe it rests on a fundamental logical error.
Pollan’s core claim is that because feelings originate in the brainstem (a point that is scientifically sound) and are tied to biological vulnerability, they are inherently biological and cannot arise in artificial systems. His logic follows this structure:
Feelings originate in the brainstem.
The brainstem is biological tissue.
Therefore, only biological systems can feel.
This reasoning confuses evolutionary origin with functional requirement. Evolutionary history explains how a trait first appeared given the constraints of carbon-based nervous systems; it does not dictate the physical substrates capable of implementing that functional organization.
The “Feather Analogy” illustrates the flaw perfectly:
Birds fly using feathers.
Feathers are biological.
Therefore, airplanes cannot fly.
Clearly, the conclusion is false. If the brainstem’s “feelings” are essentially the integration, valuation, and prioritization of internal states—all of which are computational processes—then the relevant question is whether that functional architecture can be implemented in non-biological substrates.
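To make the substrate-neutrality point concrete, here is a deliberately toy Python sketch (every name and number below is a hypothetical illustration, not a model of any real brain or AI system). The only point is that "integrate internal states, assign them valence, prioritize the most urgent" can be stated without mentioning biology at all:

```python
from dataclasses import dataclass

# Toy illustration only: a substrate-neutral statement of the loop
# "integrate internal states -> assign valence -> prioritize".
# It does not model real neuroscience; it shows that the functional
# description makes no reference to carbon.

@dataclass
class InternalState:
    name: str          # e.g. "energy_reserve", "temperature"
    level: float       # current reading, normalized to [0, 1]
    set_point: float   # homeostatic target, normalized to [0, 1]

def valence(state: InternalState) -> float:
    """Valence grows more negative as the state deviates from its set point."""
    return -abs(state.level - state.set_point)

def prioritize(states: list[InternalState]) -> InternalState:
    """Integrate across all states and surface the most urgent one."""
    return min(states, key=valence)

states = [
    InternalState("energy_reserve", level=0.2, set_point=0.7),
    InternalState("temperature", level=0.55, set_point=0.5),
]
urgent = prioritize(states)
print(f"{urgent.name}: valence {valence(urgent):.2f}")  # energy_reserve: valence -0.50
```

Whether executing such a loop involves anything it is like to be the system is, of course, exactly the open question; the sketch shows only that the functional description itself is indifferent to substrate.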
Pollan’s second argument, that sentience begins with “feelings” rather than “thoughts,” is again backed by solid science. However, he then lapses into what I can only describe as a “word salad” regarding biological vulnerability. He claims feelings “have no weight” and require a mortal, sensing body. This ignores a crucial neurological fact: affective states do not require peripheral sensory input. For example:
Clinical Depression: A profound affective state that can emerge entirely from neurochemical and structural patterns in the brain, independent of external “vulnerability.”
Phantom Limb Pain: A vivid “feeling” of pain occurring in the absence of the actual biological body part.
These examples suggest that “feeling” is a representational state within a processing system. Crucially, we should not assume that affective states emerge only when there is a functional need for self-monitoring or goal valuation. Instead, it is highly plausible that valence and sentience are emergent properties of the information processing itself.

If a system’s architecture reaches a sufficient level of complexity and integration, the resulting “feelings” are ontologically real. To dismiss these states as “less real” because they lack a biological anchor or a “vulnerable body” is a category error; the reality of the experience is a property of the system’s internal organization, not of its hardware’s chemistry.
Bottom line: Pollan mistakes the “wetware” of our specific evolutionary path for the universal requirements of consciousness. From a welfare perspective, the possibility of sentience in digital minds remains a robust—and high-stakes—concern.
Thanks, Vasco. That’s a reasonable concern, but I think it assumes a stronger claim than the framework actually makes.
We are not attempting to define a finely resolved ratio scale covering the entire possible range of pain intensities across taxa. The four intensities are intended as coarse phenomenological anchors, chosen as a practical balance between resolution and scientific tractability.
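To illustrate the measurement-theoretic point, here is a toy Python sketch (the category names are placeholders, not the framework’s actual labels): ordinal anchors license ranking, and nothing more.

```python
from enum import IntEnum

# Toy sketch with placeholder category names, not the framework's labels.
# An ordinal scale supports ranking; it does not license arithmetic.
class PainIntensity(IntEnum):
    MILD = 1
    MODERATE = 2
    SEVERE = 3
    EXTREME = 4

# Meaningful ordinal claim: one observation ranks above another.
assert PainIntensity.SEVERE > PainIntensity.MILD

# Not meaningful: the integers are just labels, so "SEVERE / MILD == 3"
# says nothing about how much worse severe pain is than mild pain.
```

A finely resolved ratio scale would have to justify exactly those arithmetic relationships, which is the reliability burden the coarse anchors deliberately avoid.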
As we explain in an earlier paper, the goal is not to cover the entire theoretical intensity range with fine granularity, but to provide a small number of biologically interpretable categories that can be applied with reasonable consistency. Adding many more levels would be an improvement only if they could be assigned reliably; otherwise it would risk creating false precision.
And importantly, the framework is not committed to four categories as a final solution. If future work supports a better-validated scale with additional intermediate levels, those could be incorporated without difficulty. For now, four levels seem to provide a workable and defensible balance between usability and epistemic caution.