Implementational Considerations for Digital Consciousness

This post is a summary of my conclusions after a philosophical project investigating some aspects of computing architecture that might be relevant to assessing digital consciousness. [Edit: The main ideas are published in shorter form.] I tried to approach the issues in a way that is useful to people with mainstream views and intuitions. Overall, I think that present-day implementational considerations should significantly reduce the probability most people assign to the possibility that digital systems built with current architectural and programming paradigms are conscious.

The project was funded by the Long Term Future Fund.

Key claims and synopses of the rationale for each:

1. Details of the implementation of computer systems may be important to how confident we are about their capacity for consciousness.

  • Experts are unlikely to come to agree that a specific theory of consciousness is correct, and epistemic humility demands that we keep an open mind.

  • Some plausible theories will make consciousness dependent on aspects of implementation.

  • Plausible implementational challenges should therefore factor into our overall assessment of the likelihood of digital consciousness.

2. If computer systems are capable of consciousness, it is most likely that some theory of the nature of consciousness in the ballpark of functionalism is true.

  • Brains and computers are composed of fundamentally different materials and operate at low levels in fundamentally different ways.

  • Brains and computers share abstract functional organizations, but not their material composition.

  • If we don’t think that functional organization plays a critical role in determining whether a system is conscious, we have little reason to think computers could be conscious.

3. A complete functionalist theory of consciousness needs two distinct components: 1) a theory of what organizations are required for consciousness and 2) a theory of what it takes to implement an organization.

  • An organization is an abstract pattern – it can be treated as a set of relational claims between the states of a system’s various parts.

  • Whether a system implements an organization depends on what parts it has, what properties belong to those parts, and how those properties depend on each other over time.

  • There are multiple ways of interpreting the parts and states of any given physical system. Even if we know what relational claims define an organization, we need to know how it is permissible to carve up a system to assess whether the system implements that organization (see the toy sketch after this list).
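
To make the two components concrete, here is a minimal Python sketch. All names, thresholds, and the example organization are illustrative assumptions rather than anything from the project: it encodes an organization as relational claims between part-states, then uses one possible carving to map a physical trace onto those parts so that implementation can be checked.

```python
# Illustrative sketch: checking whether a physical trace implements an
# abstract organization under a chosen carving. Everything here is a toy
# assumption for exposition.

# The organization: a single relational claim, "if part A is in state 1,
# then part B must be in state 1 at the next time step."
# Encoded as (part, state) -> required (part, state) at t+1.
ORGANIZATION = {("A", 1): ("B", 1)}

# A physical trace: voltage readings at two locations over three time steps.
trace = [
    {"loc1": 4.9, "loc2": 0.1},
    {"loc1": 0.2, "loc2": 5.0},
    {"loc1": 0.1, "loc2": 0.3},
]

def carve(physical_state):
    # One permissible carving: loc1 realizes part A, loc2 realizes part B,
    # and any reading above 2.5 volts counts as state 1. A different carving
    # could map the very same trace onto a different organization.
    return {"A": int(physical_state["loc1"] > 2.5),
            "B": int(physical_state["loc2"] > 2.5)}

def implements(trace, organization):
    # The trace implements the organization (under this carving) if every
    # relational claim holds across every pair of successive time steps.
    abstract = [carve(s) for s in trace]
    for now, nxt in zip(abstract, abstract[1:]):
        for (part, state), (req_part, req_state) in organization.items():
            if now[part] == state and nxt[req_part] != req_state:
                return False
    return True

print(implements(trace, ORGANIZATION))  # True under this carving
```

The point of the sketch is that `implements` only delivers a verdict once a carving has been fixed; without constraints on permissible carvings, many systems can be read as implementing many organizations.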

4. There are hypothetical systems that can be interpreted as implementing the organization of a human brain but that are intuitively very unlikely to be conscious.

5. To be plausible, functionalism should be supplemented with additional constraints related to the integrity of the entities that can populate functional organizations.

  • Philosophers have discussed the need for such constraints and some possible candidates, but there has been little exploration of the details of those constraints or what they mean for hypothetical artificial systems.

  • There are many different possible constraints that would, in different ways, invalidate the application of functional organizations to problematic systems.

  • The thread tying the different proposals together is that implementation is constrained by the cohesiveness or integrity of the component parts that play the roles in a functional organization.

  • Integrity constraints are independently plausible.

6. Several plausible constraints would prevent digital systems from being conscious even if they implemented the same functional organization as a human brain, supposing that they did so with current techniques.

  • See examples in section 6.

  • Since these are particularly central to the project, I summarize one below:

Continuity: do the parts that play the right roles in a functional organization persist over time, composed mostly of the same materials, or are those parts different things at different times? Major components of a brain appear relatively stable. In contrast, computer memory is allocated as needed, so the memory cells that underlie different parts of a program change frequently. The memory cells storing the values of nodes in a network will likely change from invocation to invocation, as the sketch below illustrates. This might make a difference to consciousness.
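
To make the memory claim concrete, here is a minimal Python sketch; the toy layer is an illustrative assumption, not a real network. On CPython, id() reports an object's memory address, and the storage backing a node's activation values is freshly allocated on every invocation.

```python
# Illustrative sketch: the buffers holding a layer's activations are newly
# allocated on each call, so the same abstract "nodes" are realized by
# different memory cells at different times.

def forward(inputs):
    # A stand-in for one layer of a network: each call builds fresh storage
    # for the activations rather than reusing the previous call's storage.
    return [x * 0.5 for x in inputs]

a1 = forward([1.0, 2.0, 3.0])
a2 = forward([1.0, 2.0, 3.0])

# On CPython, id() is the object's address; the two differ because each
# invocation allocated new memory for the same abstract nodes.
print(id(a1), id(a2), id(a1) == id(a2))  # two addresses; False
```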

For more on why continuity might seem important, consider this thought experiment:

The brain transducer is a machine that takes as input a human brain frozen into a single state within a preservative medium and produces as output an entirely new human brain frozen in another brain state. This machine would disassemble the input brain and construct the output brain out of new atomic materials, arranged in the state the input brain would have occupied a moment later had it not been frozen. We might route the output brains back around to form the machine’s inputs, so that it produced a constant succession of new frozen brains reflecting the states a human brain would naturally occupy as its internal dynamics evolved over time.

I think we should take seriously the possibility that a series of brains produced by a transducer would not have a single unified conscious experience — or any experiences at all — even if functionalism is true. For similar reasons, we should be open to the possibility that computer systems utilizing dynamically assigned memory would not be capable of having unified conscious experiences even if functionalism is true.

7. Implementation considerations offer new opportunities for approaching the welfare of digital systems.

  • Implementation worries introduce new sources of ambiguity which may lower our confidence about the consciousness and well-being of hypothetical systems.

  • We may be able to alter the implementation of digital systems to make them more or less plausibly conscious without changing the algorithms they use; the sketch after this list illustrates one such change.

  • Implementational choices may be used to increase the probability of consciousness existing where we want it to be and reduce the probability of consciousness existing where we don’t.
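
As a concrete, purely illustrative example of such a choice, here is a hedged Python sketch: the same toy computation as the continuity sketch above, reimplemented so that each abstract node is backed by one persistent buffer. The outputs are unchanged, but the same memory cells now realize the same nodes across invocations, speaking directly to the continuity constraint summarized under claim 6.

```python
# Illustrative sketch: an implementation change, not an algorithm change.
# Each node's value lives in one long-lived buffer that is updated in place,
# so the memory cells realizing each node stay fixed across invocations.

import array

class PersistentLayer:
    def __init__(self, size):
        # One persistent buffer per layer; its cells keep their identity
        # for the lifetime of the layer.
        self.activations = array.array("d", [0.0] * size)

    def forward(self, inputs):
        # The same computation as before, written into the existing cells.
        for i, x in enumerate(inputs):
            self.activations[i] = x * 0.5
        return self.activations

layer = PersistentLayer(3)
b1 = layer.forward([1.0, 2.0, 3.0])
b2 = layer.forward([1.0, 2.0, 3.0])
print(id(b1) == id(b2))  # True: the same storage realizes the nodes each time
```

Whether such a change actually matters for consciousness is exactly the open question; the point is only that it can be made without touching the algorithm.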