Executive summary: The post argues, in a reflective and deflationary way, that there are no deep facts about consciousness to uncover, that realist ambitions for a scientific theory of consciousness are confused, and that a non-realist or illusionist framework better explains our intuitions and leaves a more workable path for thinking about AI welfare.
Key points:
The author sketches a “realist research agenda” for identifying conscious systems and measuring valence, but argues this plan presumes an untenable realist view of consciousness.
They claim “physicalist realism” is unstable because no plausible physical analysis captures the supposed deep, intrinsic properties of conscious experience.
The author defends illusionism via “debunking” arguments, suggesting our realist intuitions about consciousness can be fully explained without positing deep phenomenal facts.
They argue that many consciousness claims are debunkable while ordinary talk about smelling, pain, or perception is not, because it is the realist interpretations of the former that add unjustified metaphysical commitments.
The piece develops an analogy to life sciences: just as “life” is not a deep natural kind, “consciousness” may dissolve into a cluster of superficial, scientifically tractable phenomena.
The author says giving up realism complicates grounding ethics in intrinsic valence, but maintains that ethical concern can be redirected toward preferences, endorsement, or other practical criteria.
They argue that AI consciousness research should avoid realist assumptions, focus on the meta-problem, study when systems generate consciousness-talk, and design AI to avoid ethically ambiguous cases.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.