Implementational Considerations for Digital Consciousness
This post is a summary of my conclusions after a philosophical project investigating some aspects of computing architecture that might be relevant to assessing digital consciousness. [Edit: The main ideas are published in shorter form.] I tried to approach the issues in a way that is useful to people with mainstream views and intuitions. Overall, I think that present-day implementational considerations should significantly reduce the probability most people assign to the possibility of conscious digital systems using current architectural and programming paradigms.
The project was funded by the Long Term Future Fund.
Key claims and synopses of the rationale for each:
1. Details of the implementation of computer systems may be important to how confident we are about their capacity for consciousness.
Experts are unlikely to come to agree that a specific theory of consciousness is correct, and epistemic humility demands that we keep an open mind.
Some plausible theories will make consciousness dependent on aspects of implementation.
The plausible implementational challenges to digital consciousness should influence our overall assessment of the likelihood of digital consciousness.
2. If computer systems are capable of consciousness, it is most likely that some theory of the nature of consciousness in the ballpark of functionalism is true.
Brains and computers are composed of fundamentally different materials and operate at low levels in fundamentally different ways.
Brains and computers share abstract functional organizations, but not their material composition.
If we don’t think that functional organizations play a critical role in assessing consciousness, we have little reason to think computers could be conscious.
3. A complete functionalist theory of consciousness needs two distinct components: 1) a theory of what organizations are required for consciousness and 2) a theory of what it takes to implement an organization.
An organization is an abstract pattern – it can be treated as a set of relational claims between the states of a system’s various parts.
Whether a system implements an organization depends on what parts it has, what properties belong to those parts, and how those properties depend on each other over time.
There are multiple ways of interpreting the parts and states of any given physical system. Even if we know what relational claims define an organization, we need to know how it is permissible to carve up a system to assess whether the system implements that organization.
4. There are hypothetical systems that can be interpreted as implementing the organization of a human brain that are intuitively very unlikely to be conscious.
See examples in section 4.
5. To be plausible, functionalism should be supplemented with additional constraints related to the integrity of the entities that can populate functional organizations.
Philosophers have discussed the need for such constraints and some possible candidates, but there has been little exploration of the details of those constraints or what they mean for hypothetical artificial systems.
There are many different possible constraints that would help invalidate the application of functional organizations to problematic systems in different ways.
The thread tying the different proposals together is that functional implementation is constrained by the cohesiveness or integrity of the component parts that play the roles in a functional organization.
Integrity constraints are independently plausible.
6. Several plausible constraints would prevent digital systems from being conscious even if they implemented the same functional organization as a human brain, supposing that they did so with current techniques.
See examples in section 6.
Since these are particularly central to the project, I summarize one below:
Continuity: do the parts that play the right roles in a functional organization persist over time, remaining mostly composed of the same materials, or are those parts different things at different times? Major components of a brain appear relatively stable. In contrast, computer memory is allocated as needed, so the memory cells that underlie different parts of a program change frequently. The memory cells storing the values of nodes in a network will likely change from invocation to invocation. This might make a difference to consciousness.
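The memory-allocation point can be made concrete with a minimal sketch (my own illustration, not from the original post; the `forward` function and its names are hypothetical, and the object-identity behavior described is specific to CPython). Each invocation allocates fresh objects for the "node values," so the same abstract node is realized by a different piece of memory each time:

```python
def forward(x):
    # A fresh list is allocated on every call; the "activation" of this
    # abstract node lives wherever the allocator happens to place it.
    return [xi * 0.5 for xi in x]

# Keep every invocation's result alive so their identities are comparable.
results = [forward([1.0, 2.0, 3.0]) for _ in range(5)]

# In CPython, id() reflects an object's identity (its memory address).
# Five simultaneously live results are five distinct objects: the same
# functional role, realized by different memory on each invocation.
distinct = len({id(r) for r in results})
print(distinct)  # → 5
```

Nothing here bears on whether this matters for consciousness; it only shows that, under ordinary dynamic allocation, the physical substrate playing a given role is not a stable, persisting part in the way a neuron is.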
For more on why continuity might seem important, consider this thought experiment:
The brain transducer is a machine that takes as an input a human brain that has been frozen into a single state within a preservative medium and produces as an output a fully new human brain frozen in another brain state. This machine would disassemble the input brain and construct the output brain out of new atomic materials that reflected what state the input brain would have momentarily occupied were it not frozen. We might route the output brains back around to form the machine’s inputs so that it produced a constant succession of new frozen brains reflecting the states that a human brain would naturally occupy as its internal dynamics evolved over time.
I think we should take seriously the possibility that a series of brains produced by a transducer would not have a single unified conscious experience — or any experiences at all — even if functionalism is true. For similar reasons, we should be open to the possibility that computer systems utilizing dynamically assigned memory would not be capable of having unified conscious experiences even if functionalism is true.
7. Implementation considerations offer new opportunities for approaching the welfare of digital systems.
Implementation worries introduce new sources of ambiguity which may lower our confidence about the consciousness and well-being of hypothetical systems.
We may be able to alter the implementation of digital systems to make them more or less plausibly conscious without changing the algorithms they use.
Implementational choices may be used to increase the probability of consciousness existing where we want it to be and reduce the probability of consciousness existing where we don’t.
Interesting project! I’m curious – did doing this work update you towards or away from ~functionalist theories of consciousness?
I’ve generally been more sympathetic with functionalism than any other realist view about the nature of consciousness. This project caused me to update on two things.
1.) Functionalism can be developed in a number of different ways, and many of those ways will not allow for digital consciousness in contemporary computer architectures, even if they were to run a program faithfully simulating a human mind. The main thing is abstraction. Some versions of functionalism allow a system to count as running a program if some highly convoluted abstractions on that system can be constructed that mirror that program. Some versions require the program to have a fairly concrete mapping to the system. I think digital consciousness requires the former kind of view, and I don’t think that there are good reasons to favor that kind of functionalism over the other.
2.) Functionalism is a weirder view than I think a lot of people give it credit for, and there really isn’t much in the way of good arguments for it. A lot of the arguments come down to intuitions about cases, but it is hard to know why we should trust our intuitions about whether random complex systems are conscious. Functionalism seems most reasonable if you don’t take consciousness very seriously to begin with and you think that our intuitions are constitutive in carving off a category that we happen to care about, rather than getting at an important boundary in the world.
Overall, I feel more confused than I used to be. My probability of functionalism went down, but it didn’t go to a rival theory.
Thanks for writing this up, this is a really interesting idea.
Personally, I find points 4, 5, and 6 really unconvincing. Are there any stronger arguments for these, that don’t consist of pointing to a weird example and then appealing to the intuition that “it would be weird if this thing was conscious”?
Because my intuition tells me that all of these examples would be conscious. That makes the arguments unconvincing, but also hard to argue against!
But overall I get that given the uncertainty around what consciousness is, it might be a good idea to use implementation considerations to hedge our bets. This is a nice post.
I’m not particularly sympathetic with arguments that rely on intuitions to tell us about the way the world is, but unfortunately, I think that we don’t have a lot else to go on when we think about consciousness in very different systems. It is too unclear what empirical evidence would be relevant and theory only gets us so far on its own.
That said, I think there are some thought experiments that should be compelling, even though they just elicit intuitions. I believe that the thought experiments I provide are close enough to this for it to be reasonable to put weight on them. The mirror grid, in particular, just seems to me to be the kind of thing where, if you accept that it is conscious, you should probably think everything is conscious. There is nothing particularly mind-like about it, it is just complex enough to read any structure you want into it. And lots of things are complex. (Panpsychism isn’t beyond the pale, but it is not what most people are on board with when they endorse functionalism or wonder if computers could be conscious.)
Another way to think about my central point: there is a history in philosophy of trying to make sense of why random objects (rocks, walls) don’t count as properly implementing the same functional roles that characterize conscious states. Some accounts of implementation have been given, but on plausible readings those accounts suggest that contemporary computers would not be conscious no matter what programs they run. If you don’t particularly trust your intuitions, and you don’t want to accept that rocks and walls properly implement the functional roles of conscious states, you should probably be uncertain over exactly which account is correct. Since many of them would rule out consciousness in contemporary computers, you should lower the probability you assign to that.