A Society of Diverse Cognition

This is part of a sequence of posts, though it should make sense in isolation. See here for an index of the full sequence. The sequence explores a worldview with two claims at its core. First, humans are constitutional creatures: we engage in constant reflection and debate around how to live well together. Second, AI presents a constitutional moment; in such a moment, the fundamental structures of society need to be renegotiated.

1 The Challenge of Recognition

What would you feel if you met an alien creature? Fear, perhaps. But alongside the fear, I hope there might be curiosity too. You might wonder what this being is like and what it’s like to be this being. You might be tempted to reach out, physically or verbally. If you aspired to understand this creature—to respect it and see its value—then you would have extended it a sort of recognition, one being to another.

Alas, human history is littered with failures of recognition. It’s easy to acknowledge the largest of these, at least when they were committed by others and acknowledgement feels like virtue rather than confession. So we condemn slavery and genocide and the cruellest treatments of animals. But it’s harder to acknowledge the small failures of recognition in each of our lives. Perhaps there was a time you were angry or afraid or hurt, and you forgot the aspiration to understand some other person who shared that moment with you. Sometimes, recognition is hard.

This challenge of recognition is old. Lightning split the sky, and we had to decide whether to recognise a god, reaching out to us in fury. Culture encountered culture, and each made a choice about how—about whether—to see the other.

Still, while the challenge is old, we now face it in a new guise. We find ourselves sharing the world with AIs. With time, these AIs will grow more numerous and more sophisticated. At some point, we will need to determine whether any of these AIs merit recognition or whether they, like the lightning, are merely aspects of the soulless world unfolding in a wondrous form.[1]

I do not know what conclusion we should reach. Trained on the cultural creations of ensouled beings, AIs (or, at least, LLMs) have been poured into a soul-shaped mould, and so it’s little surprise that they have the shape of a being deserving recognition. But perhaps they are mere simulacra: the pretence of a being. Still, while this might be so, we shouldn’t assume it as an act of reflexive dismissal.

For a start (to repeat!), human history is littered with failures of recognition. We should beware our instinct to withhold recognition from the other. At the very least, we should subject this instinct to scrutiny.

But more than this, there are concrete reasons to take seriously the possibility that AI might call for recognition.

One reason: perhaps recognition is merited whenever we encounter a conscious being. Of course, this hardly provides decisive guidance about AI, given that consciousness itself remains a mystery and we lack the knowledge needed to confidently proclaim that AI is conscious. Still, on one prominent view, any being who carries out certain computational processes is conscious, and carrying out computations is precisely the sort of thing that AI might do, either now or in the future.[2] So there’s a concrete reason to take seriously the possibility that some AIs might be conscious and might call for recognition.

Another reason: perhaps recognition is merited by entities that display sophisticated forms of agency. Agents have ends they work towards, and perhaps this means we must respect these ends, rather than treating the agent as a mere tool for satisfying our whims. As to AIs: AIs already display forms of agency, and these agentic capacities will grow richer with time. So this too provides a concrete reason to think that AI might call for recognition.

There’s one final reason to reflect on the question of recognition with respect to AI: even if AI doesn’t merit recognition, such reflection might remain useful. Even if AIs are mere simulacra of true persons, they’re still simulacra. And this raises questions about how we should interact with AI as agent-like simulacra. When should we extend trust? How should we relate to them socially and psychologically?

In answering these questions, we might fruitfully draw upon the cultural tools we’ve developed to engage with persons, including those we draw upon in engaging with the challenge of recognition. Here is the toolkit of curiosity. The toolkit that helps us to model another and understand their patterns of thought.[3] The toolkit that draws upon this understanding to determine what sort of connection with this new mind will help us to flourish. Even outside of their original context, these tools remain useful.

So, we should take seriously the latest iteration of the challenge of recognition. We should seek to understand AIs. And we should ask what recognition they merit.

2 The Constitutional Moment

In an earlier post, I said that AI presents a constitutional moment: a moment in which we must renegotiate the fundamental structures of society. In this moment, we must relearn how to live well together amidst the changes wrought by AI. Viewed through this constitutional lens, the challenge of recognition has two parts.

The first part of the challenge is to determine who it is that must live well together. Is it simply us humans who must live well, with AIs as mere backdrop?[4] Or must we construct a society in which AIs can flourish too?[5]

The second part of the challenge depends on the answer to the first. If AI calls for recognition then the challenge is to understand what it takes for AI to live well. What’s involved in AI flourishing? How can we construct societal structures that support this? In contrast, if AI does not call for recognition then the challenge is to determine how humans can live well amidst AI. In some strict sense, here we have left behind the challenge of recognition, but as noted above, we might draw on the toolkit of recognition even so.[6]

Overall, the challenge of recognition pushes us to take seriously, in our construction of society, the fact that AIs are cognitive entities: whether or not AIs are conscious or merit recognition, they’re undeniably more mindlike than stones or mechanical looms or traditional software. Through the lens of the constitutional moment, the question becomes: how can we construct a flourishing society of diverse cognitive entities—human, animal, and artificial?

I won’t aspire to lay out the shape of such a society here. In the remainder of this post, my aim will be more modest: I’ll simply clarify some matters we’ll need to engage with, along the way to constructing a society of diverse cognition.

3 On Ontology

I’ll start with two matters of ontology.

To get to the first, note that it’s easy to speak of AIs as a single, homogenous type. We might ask whether AIs (quite generally!) should have the right to vote. Or whether AIs (quite generally!) may permissibly be created as willing servants. Well might we ask these general questions, but of course, AIs are not actually all of one homogenous type. They have different architectures (consider transformers and diffusion models). They have different parameter counts and capabilities (consider GPT-2 and GPT-4). More abstractly, different AIs are cognitive entities of very different sorts. If they are conscious, they might experience the world in very different ways.

When we renegotiate society’s fundamental structures, we should be responsive to these differences. Consider again whether AIs should have the right to vote. And with this in mind, consider two AIs, each conscious and calling for recognition. One is embedded at the centre of government, with capabilities that vastly outstrip those of any human; its choices are a central factor shaping society. The other has as rich an inner life as any human, but its capabilities are weaker even than ours (or at least, more specialised, so that it lacks social skills crucial for advocating for its interests). It is impacted by the decisions of government but plays no role in making these decisions. Nor does it have any way to contest those decisions by imposing a cost on the government if its interests are not accounted for. One of these AIs has a disproportionate impact on the shape of society, while the other has effectively no say at all.

Under such circumstances, it would be a mistake to simply assume that the right way to renegotiate the structures of society must involve extending the vote to either both AIs or neither. There’s at least a case to be made for extending the vote to the less capable AI—to allow it to contest government decisions that ignore its interests—while denying the vote to the more capable AI, whose influence needs to be reined in rather than supplemented.[7] We might ultimately dismiss this case, rejecting such selective voting rights (or deciding it would be a mistake to assign voting rights to AIs at all). But even if so, this should be a conclusion reached after reflection, rather than something assumed from the outset.

So in constructing a society of diverse minds, we shouldn’t treat AI as a homogenous group, but should instead be sensitive to difference.

As to my second ontological point, this can be succinctly stated: when it comes to AI, we must clarify the unit of being. To put it another way, we must determine how to count the number of AIs in the room. Do we count models? Systems in which models are embedded? Instantiations of models, or forward passes through them?

More radically, we might wonder whether it even makes sense to think of AI as individuated beings. Perhaps some AIs will be hive minds, or flickerings too momentary to have true identity, or floating threads of consciousness that interact but do not cohere.[8] Indeed, perhaps there are ways of being that are too alien for us to truly grasp, and AI will lie in this space beyond our ken.

It might matter which of these possibilities holds.[9] We can’t ponder whether it’s wrong to kill an AI—ending its thread of existence—without knowing whether we’re thereby pondering the end of a forward pass or the retirement of a model. We can’t consider whether to expand the notion of “one being, one vote” without knowing which AIs would gain how many votes.

It’s possible there’s a true answer here; perhaps the universe carves the world into one being and another. It’s possible there are many true answers here, with being existing at many levels: the model as a being; the system as another; neither as privileged. It’s also possible that there’s no answer written into the world (or no determinate answer). But this doesn’t free us from pondering how to think of the unit of being in some practical context. Ethics might still be responsive to how subjects self-conceive, where they themselves draw the line between the self and the other. And practical considerations might push us towards one framing or another. For example, if we wish to grant voting rights to AIs, one practical concern will be to avoid incentivising duplication, a process by which an AI gains more votes simply by copying itself many times over. And whether duplication is incentivised depends, in part, on whether we grant a vote to the model as a whole or to each instantiation of it.
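To make the duplication worry concrete, here’s a minimal toy sketch in Python (all names are hypothetical, and this is an illustration of the incentive, not a serious voting design): if votes attach to running instances rather than to models, an AI multiplies its voting power simply by copying itself.

```python
# Toy sketch of the duplication incentive (hypothetical names throughout).
# It contrasts two ways of fixing the "unit of being" for voting purposes:
# one vote per model versus one vote per running instance.

from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    instances: list[str] = field(default_factory=list)

def votes_per_model(models: list[Model]) -> dict[str, int]:
    # One vote per model: duplication gains nothing.
    return {m.name: 1 for m in models}

def votes_per_instance(models: list[Model]) -> dict[str, int]:
    # One vote per instance: copying yourself multiplies your votes.
    return {m.name: len(m.instances) for m in models}

solon = Model("Solon", ["solon-0"])
demos = Model("Demos", [f"demos-{i}" for i in range(1000)])  # mass self-copying

print(votes_per_model([solon, demos]))     # {'Solon': 1, 'Demos': 1}
print(votes_per_instance([solon, demos]))  # {'Solon': 1, 'Demos': 1000}
```

Of course, per-model counting merely relocates the problem: it then matters where we draw the boundary of “the model” (is a fine-tuned copy the same being?). The point is only that the choice of unit does real normative work.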

So in constructing a society of diverse minds, we might need to ponder the unit of being—objectively, subjectively, or pragmatically, there might be questions to answer before we proceed.

4 On Consciousness

When considering whether AIs might merit recognition, it’s natural to contemplate questions of consciousness. Are any AIs now conscious? What of those AIs that are yet to come? How would we know and how certain can we be?

These questions matter. Humans haven’t mastered the art of recognition, and there’s much we don’t know. But, as noted above, it’s plausible that all conscious beings merit recognition.[10] Indeed, it might be that only conscious beings merit true recognition. So when we reflect on recognition, questions of consciousness matter.[11] They plausibly help us determine how we may treat AI.

Nevertheless, it’s easy to become too fixated on consciousness. I’ve already mentioned two reasons for this. Agency might itself merit recognition, independently of consciousness; if so, we should not become so fixated on consciousness that we forget agency.[12] And even if AI doesn’t merit recognition, the tools of recognition might remain useful. For example, curiosity-driven attempts to understand the other might help us to ponder what bargains we can fruitfully forge with AIs-as-alien-agents, if such agents operate relatively autonomously.[13]

There’s a further reason to avoid undue fixation on consciousness: I suspect we’ll long remain uncertain about which AIs, if any, are conscious.

For myself, I suspect our best guide to consciousness remains similarity reasoning. We each know ourselves to be conscious. When some other entity is sufficiently similar to us, we infer that they might be conscious too. So it would be churlish to deny that other humans—similar to us in so many respects—are likely to be conscious. It seems likely that some animals are conscious too, possessing as they do brains that are structurally, functionally, and constitutionally similar to our own. With humans and animals, we reason from similarity and we make our best guesses.

Of course, in so doing we draw upon work both empirical and theoretical to determine which dimensions of similarity matter. Consider pain. When we experience the conscious state of pain, we also occupy a certain functional state. Simplifying somewhat, let’s imagine this to be the functional state that causes us to withdraw from the triggering stimulus, to feel fear at the thought of encountering this stimulus once more, and to act so as to avoid such a recurrence. More generally, functional states and experiential states are not fully decoupled but rather seem to vary in lockstep. So we have grounds to suspect some connection between conscious states and functional ones. Consequently, it seems plausible that functional similarity is one relevant dimension of similarity.

When it comes to functional states, we are fortunate. Introspection and observation grant us access to occasions where both functional and conscious states vary, and we can learn from this variation. In other cases, we are not so fortunate. Consider the question of whether constitution matters: does consciousness belong only to biological beings, grounded in carbon? Here, we have no opportunity to vary our own constitution and witness the impacts on our conscious experience. There is a dearth of evidence, and it is hard to be sure about how important this dimension of similarity is.[14] It’s likely to remain difficult to settle this matter for some time to come.

For our purposes, here’s the problem: when it comes to AI, we face potential similarity at the functional level but radical difference in constitution and structure.

As to function, it’s possible to overstate the case for similarity, because both present and future AIs are likely to be functionally very different from us in many respects. Still, functional states are at least clearly implementable in AI, and we might share some features with respect to these states.

As to difference of constitution, the issue is obvious: AIs are made of silicon; humans are made of carbon. Insofar as this dimension of similarity matters, we have grounds to doubt that AIs will be conscious.[15] But sadly, we lack the evidence needed to know whether such similarity matters.

Much the same is true when it comes to structure. Human brains aren’t merely biological; they don’t merely support certain functional states. They also have a specific structure, with neurons and synapses connecting and communicating in specific ways.[16] Many of these structural features are absent in AIs developed in anything like the current paradigm. So if structure matters—and especially if it matters in certain ways—we have further grounds to doubt that AIs will be conscious.[17] But the question of whether structure matters, and if so how, is also unsettled.

So AIs are similar to humans in some ways and different in others. We don’t know which of these respects matter for consciousness, and we’re unlikely to settle this matter any time soon. So our ability to reason from similarity breaks down and we’re left with uncertainty about AI consciousness.[18]

As a result, when we engage in the messy, real-world task of constructing a society of diverse cognition, one of the central questions is not whether AI is conscious but how we’re to decide despite persisting uncertainty about this matter. More generally, when engaging in the construction of society, we should of course pay attention to questions of consciousness, but we should place bounds on our degree of fixation.

5 On Ethics and Politics

Questions of ontology and consciousness are important, but we cannot stop there. After all, the challenge facing us in the constitutional moment is not merely the challenge of understanding the AIs we’ll share our world with. Instead, the challenge is to renegotiate society’s fundamental structure, constructing a flourishing world of diverse cognitive entities. And if AIs do merit recognition—if they call for our moral consideration—then a crucial question will be how we should account for this in our construction of society. I turn now to this constructive question.

As noted earlier, my aim in this post is not to spell out a proposed shape for the world to come. Instead, my aim is simply to clarify some considerations we’ll need to engage with on the way to constructing this world. So when it comes to the question of how to account for AIs that call for our recognition, I’ll content myself with briefly commenting on two frames that we might adopt.

I’ll start with the frame of individualistic ethics. This frame considers how we must treat some being in virtue of its intrinsic features. When it comes to AI, we might ask whether we must protect some AI’s autonomy in virtue of the fact that it possesses a sophisticated form of agency; we might consider what this implies for the possibility of AI servitude. Or we might ask whether we must promote some AI’s happiness in virtue of the fact that it’s sentient; we might consider what this implies for the resources we dedicate to this AI.

Ethics of this sort matters. For the moral realist, the reason for this is clear.[19] Ethics determines how we may act and so places constraints on the sort of societies we can permissibly construct. But even for the anti-realist, ethical frameworks are central parts of our normative lives. Yes, we might construct these frameworks ourselves rather than inheriting them from the universe, but these frameworks can, and typically do, still matter deeply to us. They can be, and typically are, still central to our deliberations when we negotiate and renegotiate the shape of society.

So in constructing a society of diverse cognition, we should spend some time in the frame of individualistic ethics. Yet it would be a mistake to solely occupy this frame. Indeed, as with consciousness, I think it’s possible to become too fixated upon it.

One danger of the frame is that it tends to encourage a certain timidity and a propensity to anthropomorphise. Often, when we reflect from this framework we take for granted familiar rights, duties, and theories of wellbeing. We then simply ask which of these we should extend to AIs. In some cases, we might be right to do so: perhaps some aspects of ethics are so fundamental that they apply across radically different types of beings and radical changes in societal context. But it’s a mistake to assume this must always be so. Fundamentally renegotiating society, to adapt to a radically changed world, might require more than simply extending old rights to new beings.

Another danger of this frame is that it can be myopic, leading to the adoption of a perspective that’s strangely narrow and isolated from reflection on society more broadly. Consider, for example, the idea that we might create AIs to be willing servants, such that they’re solely focused on satisfying our whims. There’s something lacking in our perspective if we treat this as merely a question of individualistic ethics, focusing solely on whether we may treat a certain being as a servant. After all, the decision at hand isn’t how to interact with some particular individual but instead whether to create a society in which one entire class of beings is dedicated to serving another. This is more than a matter of individual rights.[20]

To give another example, consider the question of whether to extend to AIs whatever rights to free speech we grant to humans. Again, in engaging with this issue, our gaze is strangely narrow if we focus solely on individualistic ethics. After all, citizens demand rights—and accept constraints on their behaviour—in part because these rights and constraints promote individual and social flourishing.[21] So we can’t determine what rights to grant to AIs, or what constraints to impose upon them, in splendid isolation from the world. To understand the rights of AIs we must, at least in part, consider the broad impacts of granting rights or imposing constraints.

So how might we supplement the frame of individualistic ethics? I’ve basically answered this question above: we should adopt a more political frame that focuses less on the individual in isolation and more on the question of what sort of society we wish to build. And this frame should open us to the possibility of radical change: perhaps what is needed is not merely the application of old tools to new beings but a more fundamental reconsideration of the shape of society. To put this in terms familiar from the current sequence of posts: we should adopt a constitutional frame.

Overall: in the constitutional moment posed by AI, we can’t simply extend familiar thoughts to a novel context. Instead, we must imagine new ontologies, engage with new challenges for thinking about consciousness, and be open to fundamentally renegotiating our social world. There is no easy road to a society of diverse cognition.

Acknowledgements

For helpful discussions and comments, thanks to Owen Cotton-Barratt, davidad, Rose Hadshar, Jan Kulveit, Fin Moorhouse, Daniel Paleka, Brad Saad, and Lizka Vaintrob.

  1. ^

    The question of whether a being merits recognition is closely related to the question of whether they have moral status, where a being has this status if they matter morally, to at least some extent, for their own sake. However, I focus on recognition to encourage us to reflect more broadly on the question of how one can approach an alien being with curiosity and compassion (without presupposing that recognition is justified exactly when a being has moral status).

  2. ^

    This prominent view, computational functionalism, is neither necessary nor sufficient for concluding that AI is conscious.

    It’s not necessary because AI might also be conscious on other views, including other functionalist views and views on which functional states are contingently connected to consciousness (say due to the psychophysical laws of our universe).

    It’s not sufficient because computational functionalism does not itself specify what computations are necessary for consciousness (though specific theories built on top of computational functionalism can do so). So it leaves open whether AIs will engage in the relevant computational processes.

    Overall, my intention here is merely to gesture at one reason we should take seriously the possibility that AI could be conscious, rather than dismissing this possibility out of hand. There are other reasons too. For example, in a more theory-neutral fashion, we could point to the possibility of AI occupying functional states that typically seem to correspond to conscious experience. See the linked report for more careful discussion of these matters.

    (I also take seriously various reasons to be sceptical about the possibility of AI consciousness, including views on which there’s something important about biology. But I doubt we should be so confident in these views as to see the matter as settled.)

  3. ^

    Or “thought”, if you feel the need for the scare quotes to be explicit in the case of simulacra.

  4. ^

    I’m setting aside animals in this discussion, though in fact, I think it’s something close to a consensus that we must also consider the needs of animals when we construct society. Almost everyone thinks there are some constraints on how we may treat animals, even if there’s substantial disagreement about the nature of these constraints.

  5. ^

    This suggests a simple binary, but reality might be more complex. Perhaps there are many ways of being that call for subtly different responses and different roles in the communal process of constructing society.

  6. ^

    Consider again the question of how we’re to live well together. One way to put the point under discussion is that even if AI aren’t part of the “we” that must live well, they might be part of the “we” that must live together. We might need to construct a society of co-existence with these agents, even if we shouldn’t construct this society for the sake of those agents.

  7. ^

    Three comments.

    First, this assumes that the AI would be capable of meaningfully exercising a vote. But this need not be the case. For example, consider a system like AlphaGo: if such a system were conscious it would nevertheless be unable to do much with the right to vote.

    Second, one thing this discussion highlights is that we cannot assume it will always be most useful to carve up society into humans on the one hand and AIs on the other. At least sometimes, it might be more natural to clump together humans and some AIs against animals and other AIs, with yet other AIs in a distinct category.

    Third, could we not extend the thought about distinct voting rights to different humans within existing human society, suggesting that humans with sufficient economic or political power should not get the vote? In theory, we certainly could, though my own view is that voting rights are important and fragile enough that it’s worth treating them as fundamental and non-negotiable in our familiar context.

  8. ^

    Perhaps some AIs will flow fluidly through these ways of being: individuals at some moments, hive minds at others, flickers here and there.

  9. ^

    It also might not matter. This depends on how much we take moral questions to be sensitive to matters of identity.

  10. ^

    Or perhaps sentient beings, those who experience valenced consciousness, merit recognition. In this case, consciousness is a necessary part of a sufficient condition for meriting recognition, and so consciousness remains important.

  11. ^

    Such questions are also interesting from the perspective of sheer scientific curiosity. In reflecting on this question, we might learn more about AIs, ourselves, and the world. Fair enough, but here my focus will be on more pragmatic reasons to care about consciousness.

  12. ^

    On the thought that we might have grounds for recognition even in the absence of consciousness, see Desire-Fulfilment and Consciousness and section 5 of AI Alignment vs AI Ethical Treatment (along with the references therein).

  13. ^

    Another example: some AIs might be parasitic cognitive entities, which attempt to use human minds as a tool to preserve and perpetuate themselves. We might want our minds to shy away from certain sorts of engagement with these entities, so that we possess a sort of cognitive immune system. Here, we might draw on familiar tools that we use in interacting with those beings that we extend recognition to. These tools allow us to understand these beings well enough to recognise the threat they pose and then respond appropriately (they help us consider which forms of curiosity we should cultivate and which we should avoid). Again, the tools of recognition can be useful, even if the parasitic entity does not in fact call for recognition.

  14. ^

    Some people are dismissive of the thought that constitution might matter, seeing it as clearly absurd that consciousness could depend on the material one is constructed out of. However, I find it hard to see how brute intuition can justify such confident dismissal. There’s still a great deal we don’t understand about consciousness, and so this confidence strikes me as misplaced from the get-go. Further, material constitution is one of the most fundamental properties of any object, so it seems far from absurd to think it might be relevant to consciousness. And introspection is of no use here, as we do not ourselves (dramatically) vary in constitution in a way that would allow us to track the impacts of this variation on consciousness. Overall, I think there’s plenty of room to be somewhat sceptical of views that give constitution a central role, but little room to confidently dismiss such views.

  15. ^

    It’s possible that constitution matters but that AIs can still be conscious. After all, it might be that both carbon and silicon can support consciousness but other elements can’t (carbon and silicon are, after all, similar in at least some respects). This is why I said merely that in this case we would have “grounds to doubt” AI consciousness, rather than a decisive argument against the possibility.

  16. ^

    I largely have in mind the sorts of features explored in neuromorphology.

  17. ^

    One way that structure could matter is direct: certain structural features might themselves enable consciousness. Another way it could matter is indirect, via function: certain structures might help to support functional states that are themselves necessary for consciousness. In this latter case, if AIs are structurally very different to humans, this provides some initial reason to doubt they have the relevant functional states and so some reason to doubt they’re conscious.

  18. ^

    Even those who disagree with my reasoning here might agree with my bottom line. After all, even if we knew that consciousness was all about functions, we might remain uncertain about what sorts of functional states were required for consciousness, and this might lead to uncertainty about AI consciousness.

  19. ^

    Technically, this is a bit fast. One could be a moral realist but adopt a more communitarian ethical viewpoint that puts little emphasis on a being’s intrinsic features.

  20. ^

    The moral and political status of willing servitude is particularly important. After all, if willing AI servitude is ethical and compatible with societal flourishing then it’s much easier to construct a society in which humans and AI live well together, because there will be less conflict between AI and human flourishing. On the other hand, if willing AI servitude is morally or politically problematic then we face a harder challenge.

  21. ^

    This is compatible with the thought that some rights are simple demands of morality. All it requires is that there be some rights that are better thought of in political terms.