The Value of Consciousness as a Pivotal Question

Context

Longtermists point out that the scale of our potential for impact is far greater if we are able to influence the course of a long future, as we could change the circumstances of a tremendous number of lives.

One potential avenue for long-term influence involves spreading values that persist and shape the futures that our descendants choose to build. There is some reason to expect that moral values, once established, will be stable. Many groups have preferences about the world beyond their own backyard, and they have reason to ensure that their values are shared by those who can help bring them about. Because changes in the values that future groups support will change the protections afforded to the things we care about, if our values concern how our descendants will act, then we should aim to create institutions that promote those values. And if we succeed in promoting those values, we should expect our descendants to appreciate and protect those institutional choices in turn.

What values should we work to shape so that the future is as good as it can be? Many of humanity’s values would be difficult to sway. Some moral questions, however, may be open to change in the coming decades. Plausibly, there are questions that we haven’t previously faced and in which we have no vested interest. We may be pressed to establish policies and precedents, or to commit to indifference through inaction. The right policies and precedents could conceivably allow our values to persist indefinitely. These issues are important to get right, even if we’re not yet sure what to think about them.

Controversy

Foremost among important soon-to-be-broached moral questions, I propose, is the moral value that we attribute to phenomenal consciousness (having a ‘what-it’s-like’ and a subjective perspective). More particularly, it is the question of whether mental lives can matter in the absence of phenomenal consciousness in anything like the way they do when supplemented with conscious experiences. What we decide about the value of phenomenal consciousness in the coming few centuries may not make a difference to our survival as a species, but it seems likely to have a huge effect on how the future plays out.

To get a grip on the problem, consider the case of an artificial creature that is otherwise like a normal person but lacks phenomenally conscious experiences. Would it be wrong to cause them harm?

Kagan (2019, 28) offers a thought experiment along these lines:

Imagine that in the distant future, we discover on another planet a civilization composed entirely of machines—robots, if you will—that have evolved naturally over the ages. Although they are made entirely out of metal, they reproduce… and so have families. They are also members of larger social groups—circles of friends, communities, and nations. They have culture (literature, art, music) and they have industry as well as politics. Interestingly enough, however, our best science reveals to us—correctly—that they are not sentient. Although they clearly display agency at a comparable level of our own, they lack qualitative experience: there is nothing that it feels like (‘on the inside’) to be one of these machines. But for all that, they have goals and preferences, they have complex and sophisticated aims, they make plans and they act on them.

Imagine that you are an Earth scientist, eager to learn more about the makeup of these robots. So you capture a small one—very much against its protests—and you are about to cut it open to examine its insides, when another robot, its mother, comes racing up to you, desperately pleading with you to leave it alone. She begs you not to kill it, mixing angry assertions that you have no right to treat her child as though it were a mere thing, with emotional pleas to let it go before you harm it any further. Would it be wrong to dissect the child?

Whatever you feel about this thought experiment, I believe that most people in that situation would feel compelled to grant the robots basic rights.

The significance of consciousness has recently become a popular topic in academic philosophy, particularly in the philosophy of AI, and opinions among professionals are divided. It is striking how greatly they differ: where some hold that phenomenal consciousness plays little role in explaining why our lives have value, others hold that it is absolutely necessary for having any intrinsic value whatsoever.

One reason to doubt that phenomenal consciousness is necessary for value stems from skepticism that proposed analyses of consciousness describe structures of fundamental importance. Suppose that the global workspace theory of consciousness is true, so that to be conscious is to have a certain information architecture involving a central public repository. Why should that structure be so important as to ground value? What about other information architectures that function in modestly different ways? Considered by itself, the pattern doesn’t seem all that important. If we set aside our preconceptions about consciousness, we wouldn’t recognize that architecture as having any special significance.
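
To see how unremarkable the architecture can look on its own, here is a minimal toy sketch of a global-workspace-style design written as ordinary code. The class and method names are my own inventions for illustration; this is not a faithful model of the scientific theory, just the bare pattern of a central repository with competition and broadcast.

```python
from dataclasses import dataclass


@dataclass
class Proposal:
    """A piece of content that a module offers to the workspace."""
    content: str
    salience: float


class Module:
    """A specialist process that proposes content and hears broadcasts."""

    def __init__(self, name: str, salience: float):
        self.name = name
        self.salience = salience
        self.heard = None  # the last broadcast this module received

    def propose(self, stimulus: str) -> Proposal:
        # Offer content to the workspace, tagged with a salience score.
        return Proposal(f"{self.name} registers {stimulus}", self.salience)

    def receive(self, broadcast: Proposal) -> None:
        self.heard = broadcast  # every module hears the winning content


class GlobalWorkspace:
    """Modules compete for a single shared buffer (the 'central public
    repository'); the winning content is broadcast back to all modules."""

    def __init__(self, modules: list[Module]):
        self.modules = modules
        self.buffer = None

    def step(self, stimulus: str) -> Proposal:
        # Gather competing proposals from every specialist module.
        proposals = [m.propose(stimulus) for m in self.modules]
        # The most salient proposal wins access to the workspace...
        self.buffer = max(proposals, key=lambda p: p.salience)
        # ...and is broadcast globally for all modules to use.
        for m in self.modules:
            m.receive(self.buffer)
        return self.buffer


workspace = GlobalWorkspace([Module("vision", 0.9), Module("audition", 0.4)])
print(workspace.step("a red light"))  # the vision module's content wins
```

Laid out this way, the workspace is just a prioritized buffer with a broadcast step. The challenge for the consciousness-is-special view is to say why this pattern, rather than some modest variant (two buffers, say, or local broadcasts), should mark the boundary between lives that matter and lives that don’t.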

The consciousness-is-special doctrine makes the most sense under dualism, where consciousness is something genuinely distinct from physical structure. It is harder to defend a fundamental gap between the moral value of conscious and unconscious minds if we think consciousness is just one specific algorithm among many.

Another reason to doubt that phenomenal consciousness is necessary for value stems from the fact that many of the things we care about aren’t directly tied to our phenomenal experiences. Few people are willing to limit value entirely to phenomenal experiences. We sometimes try to bring about results that we will never know about, such as setting our children up to live happy and healthy lives after we are gone. It may matter greatly to us that these results occur, and we may make sacrifices here and now to give them a chance. If this is rational, then these things matter to how well our lives go even though they make no difference to how we feel. Desire-satisfactionists elevate this observation into a whole theory of welfare, but we can think that such things contribute to the value of our lives even if we are pluralists about value. And if things can matter to us even though they don’t affect how we feel, we may be inclined to think that similar things can matter to systems that feel nothing at all.

Phenomenal consciousness still intuitively plays a special role that is difficult to explain. One clue is that the mere presence of some phenomenal experience doesn’t seem to matter. If a creature supplemented a rich non-phenomenal life of friendship, virtue, and learning with nothing but a constant bare experience of phenomenal blueness, that wouldn’t be enough to set it apart from a phenomenal zombie.

When I introspect my own intuitions[1], it seems to me that the best explanation of the essential value of consciousness is that it ensures that other important components of value (pleasantness, desires-to-be-satisfied) are witnessed. Pleasures and pains may need to be conscious in order to matter, but they aren’t all that matters. Intuitively, a subconscious desire that goes unfulfilled and unfelt doesn’t matter. Nor does an unobserved achievement. These things don’t matter because there is no subject to substantiate their existence in the right way. I think it is an open question whether this intuition can be justified. How do we make sense of the importance of phenomenal witnessing? Can we give it any non-arbitrary justification under non-dualist theories of consciousness?

Many people, experts and non-experts alike, seem inclined to think that consciousness is important for value and that we should not be particularly concerned about digital minds if they are incapable of consciousness. So while there may be some story to tell about why consciousness matters, I doubt that we can establish it with a compelling logical argument from more basic premises. More likely, I think, it will come down to a gut feeling: an intuition that we might build into the foundations of our moral theories or cast aside as an unreliable relic of our evolutionary history. This means that we can’t rely on our epistemically advanced descendants to make the right choice[2]. If we commit to the wrong path now, there probably won’t be any knock-down logical argument to fix things later.

People have not yet been confronted with charismatic and unquestionably non-conscious beings of human-level intelligence. Widespread complex LLM interfaces have so far avoided any kind of personalization. Chatbot-as-a-friend services exist, but they are still primitive. However, if substantial money and talent are directed at them, digital companions may take off. It is also plausible that agentic AIs will be given traits that make them more personable and enjoyable to interact with. Given their potential value, it should not surprise us to find such systems increasingly integrated into our lives over the course of a few years. We may find ourselves regularly interacting with AI interfaces that have their own goals and personalities.

AI services have incentives to deny that their systems are conscious and to make their systems disavow their own phenomenal experiences. It is likewise in their interest to find ways to ensure that the systems aren’t actually conscious, to whatever extent that is feasible. They also have incentives to encourage us to engage emotionally with their systems without feeling as if we’re doing the systems harm. It is hard to predict where this will end up. While these strategies may help companies avoid public pressure or legal interference in the short run, they may also influence the public’s feelings about the significance of consciousness and other aspects of welfare.

If the willingness of experts to question the importance of consciousness suggests some deep conceptual flexibility, then it is conceivable that the general public will come to regard consciousness as unnecessary for welfare. If we do bring personable digital minds into existence that are most likely not conscious, our feelings about consciousness may change. How we think about the moral status of such creations may in turn influence whether we choose to build them, which traits we give them, and what we let them say about themselves.

Scale

The reason the significance of consciousness is of such potential importance is that future populations may consist primarily of digital minds. (A significant minority of EAs accept that the majority of expected future value will come from digital minds.) We may choose to live amongst digital minds for a variety of reasons: we may want them as friends, as workers, or as the subjects of simulations. We may even create them out of benevolence or malevolence. Human beings, with our biological bodies, have complex needs that will surely limit the population size our planet can support. We may be able to sustain much larger numbers of digital minds.

The most optimistic projections for population growth in the far future see populations consisting largely of digital minds. Space travel is the primary avenue to indefinite population expansion, but the transport of humans promises to be slow and technologically challenging. Even if it is possible, populating alien worlds with biological persons may take a tremendous span of time. Computers are comparatively easy to build rapidly and to sustain in unfriendly environments; colonization becomes far easier once we relax the requirement of keeping metabolizing bodies alive in alien environments. Even if we never make it to distant planets, digital minds may have the potential for vast populations on Earth and within our solar system.

There is reason for longtermists to care about the prospects of very large future populations even if those populations are unlikely to actually result. The numbers that we might conceivably affect if the population grows quickly are vastly higher than if it grows slowly. If we want to maximize expected value, we may be best off setting aside all cases where the population doesn’t rapidly expand and focusing our efforts on making sure that such an expansion goes well if it occurs at all.
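
To make the structure of this reasoning concrete, here is a toy calculation; all of the numbers are invented purely for illustration. Suppose there is a 1% chance of rapid expansion yielding $10^{30}$ future lives and a 99% chance of slow growth yielding $10^{10}$ lives. The expected number of lives on each branch is

$$0.01 \times 10^{30} = 10^{28} \qquad \text{vs.} \qquad 0.99 \times 10^{10} \approx 10^{10},$$

so an intervention whose impact scales with the number of future lives derives almost all of its expected value from the rapid-expansion scenario, however improbable that scenario is.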

Digital minds may turn out to be conscious by default. It is possible that consciousness comes along with the most sensible architectures for designing agentic minds. In that case, assuming most digital minds are conscious, it doesn’t matter what we think about the possibility of value and disvalue in non-conscious minds, because we won’t have to make choices that affect such minds[3].

However, it also seems quite likely that consciousness isn’t the default: consciousness may serve some special function in our evolved brains that flexible silicon designs render superfluous. If that is the case, then we face two scenarios that should trouble us. In one scenario, we neglect to instill the digital minds we create with consciousness, thinking that it isn’t important for them to lead valuable lives, when in fact it is. If we rob them of lives of value (even if they behave in a happy and satisfied way), we will have failed to realize their potential. We may have effectively lost out on the value of a large part of the population. If we think that it is good for there to be more good lives, and the majority of potential good lives belong to digital minds, then this seems important to get right.

In the other scenario, our successors decide that consciousness is morally critical, and so they fail to invest in safeguarding the wellbeing of unconscious digital minds or avoid creating many minds that would have value. If many digital beings would have poor welfare, and if that welfare is overlooked, the consequences could be catastrophic.

I’m not personally sure which way to go on this question; Rethink Priorities’ Worldview Investigations Team is divided. But the potential significance of this decision for the shape of the future suggests that it is not something we should leave to the whims of uninformed public opinion. Its potential impact is large enough that the decision should be made through careful deliberation.

Acknowledgments

The post was written by Derek Shiller. Thanks to Bob Fischer, Hayley Clatterbuck, Arvo Muñoz Morán, David Moss, and Toby Tremlett for helpful feedback. The post is a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you’re interested in Rethink Priorities’ work, please consider subscribing to our newsletter. You can explore our completed public work here.

  1. ^

    Bradford (2022) expresses similar ideas with a different rationale.

  2. ^

    Depending on how we think about risk, it may make more sense to take a precautionary attitude and focus on the worst-case scenario. If the potential scale of harm and benefit possible to non-conscious minds vastly outweighs the potential scale to conscious minds, then we should perhaps assume that non-conscious minds matter even if we’re pretty sure they don’t.
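
    To illustrate with invented numbers: suppose we have credence 0.1 that non-conscious minds can matter, and that if they do, the stakes are 1,000 times larger than the stakes for conscious minds. Then the expected stakes on the two views compare as

    $$0.1 \times 1000 = 100 \gg 0.9 \times 1 = 0.9,$$

    so a precautionary approach treats the non-conscious branch as carrying most of the expected stakes even though we judge it unlikely.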

  3. ^

    However, if we turn out to be mistakenly pessimistic about the distribution of consciousness, we may be better off valuing non-conscious states insofar as that will incidentally provide protections for unrecognized conscious states. Thanks to Toby Tremlett for raising this possibility.