Consciousness researcher and co-founder of the Qualia Research Institute. I blog at qualiacomputing.com
Core interests include: measuring emotional valence objectively, formal models of phenomenal space and time, the importance of phenomenal binding, qualia-based models of intelligence, and neurotechnology.
Thank you for this fascinating post. I’ll also share here what I posted on Twitter:
I have many reasons for thinking we should not care about non-conscious agency; here are some of them:
1) That which lacks frame invariance cannot be truly real. Algorithms are not real. They look real from the point of view of (frame invariant) experiences that *interpret* them. Thus, there is no real sense in which an algorithm can have goals—they only look like it from our (integrated) point of view. It’s useful for us, pragmatically, to model them that way. But that’s different from them actually existing in any intrinsic substantial way.
2) The phenomenal texture of valence is deeply intertwined with conscious agency when such agency matters. The very sense of urgency that drives our efforts to reduce our suffering has a *shape* with intrinsic causal effects. This shape and its causal effects only ever cash out as such in other bound experiences. So the very _meaning_ of agency, at least in so far as moral intuitions are concerned, is inherently tied to its sentient implementation.
3) Values are not actually about states of the world, because states of the world apart from moments of experience don’t really exist. Or at least we have no reason to believe they exist. As one increases the internal coherence of one’s understanding of conscious agency, it becomes clear, little by little, that the underlying *referents* of our desires were phenomenal states all along, albeit with levels of indirection and shortcuts.
4) Even if we were to believe that non-sentient agency (imo an oxymoron) is valuable, we would also have good reasons to believe it is in fact disvaluable. Intense wanting is unpleasant, and thus sufficiently self-reflective organisms try to figure out how to realize their values with as little desire as possible.
5) Open Individualism, Valence Realism, and Math can provide a far more coherent system of ethics than any other combo I’m aware of, and they certainly rule out non-conscious agency as part of what matters.
6) Blindsight is poorly understood. There’s an interesting model of how it works where our body creates a kind of archipelago of moments of experience, in which there is a central hub and then many peripheral bound experiences competing to enter that hub. When we think that a non-conscious system in us “wants something”, it might very well be because it indeed has valence that motivates it in a certain way. Some exotic states of consciousness hint at this architecture—desires that seem to “come from nowhere” are in fact already the result of complex networks of conscious subagents merging and blending and ultimately binding to the central hub.
------- And then we have pragmatic and political reasons: the moment we open the floodgates of insentient agency mattering intrinsically, we risk becoming truly powerless very fast. Even if we cared about insentient agency, why should we care about insentient agency in potential? Its scaling capabilities, cunning, and capacity for deception might quickly flip the power balance in completely irreversible ways, not unlike creating sentient monsters with values radically different from those of humans.
Ultimately, I think value is an empirical question, and we already know enough to locate it in conscious valence. Team Consciousness must wise up to threats from insentient agents and coordinate around these risks, which are catalyzed by profound conceptual confusion.