Reza Negarestani’s Intelligence & Spirit

Note: I’m still working through the (dense) book, so this is an overview to help myself, and to introduce it to others. This isn’t a critical evaluation (I’m not a philosopher) of the author’s arguments in detail.

I’ve started reading Intelligence and Spirit (2018) by Reza Negarestani and two commentaries on it. The book is about AGI/ASI, framing them as a project of reason transcending the historical, biological container in which it was born.

The book is also about what philosophy is (the craft of thinking about thinking) and how it might be done if we were to relax the constraint of the human example. The human, in this context, is viewed as non-essential, as something subject to evolution in all senses: our fixation on historically contingent humanist notions of value (especially when talking about AGI/ASI) might be a widely shared blind spot.[1]

The book’s overall query is: how can humans reason rigorously about a greater-than-human intelligence? He suggests that we can start by separating reasoning, as it happens to be implemented in biological humans, from our understanding of what reasoning itself might be. He thinks of human intelligence as something that exists in our shared medium of language, which is the expression (or more accurately, the Dasein or being/presence) of Hegel’s world-historical Geist (pp. 73-77, Chapter 6). Language is far more than a medium of communication; it is also one of storage and, importantly, of cognition (echoes here of Wittgenstein on language reflecting form-of-life). Language and its rules-of-use develop through use or dialogue, which he discusses through Jean-Yves Girard’s ludics (pp. 365-376) in the context of a toy model of ‘proto-AGI’ known as Kanzi.

Negarestani says that we can’t think sensibly about the evolution of intelligence if we don’t consider how our cognition, and our language, have been shaped by our various (biological and social) histories. So, can we recast language, infected as it is with history, into something ahistorical and formalisable?

His approach draws on Wilfrid Sellars to redefine reason and rationality through a reinterpretation of Plato, emphasising the role of normativity in our understanding (pp. 456-465). Robert Brandom’s influence is evident in the discussion of the social nature of norms and how they shape collective intelligence, proposing that our intellectual engagements are deeply intersubjective. Rudolf Carnap’s ideas help decouple language and logic from traditional representational roles, advocating a formal and systematic approach to discussing other intelligences without the encumbrance of human biases (pp. 267, 334-335). In terms of concrete approaches to formalisation, there are sections dealing with category theory (pp. 166-171), proof theory (pp. 358-371), and type theory (pp. 414-422).

There is a subtle political thread throughout the book: his commitments seem to be to truth-seeking and the (moral) equality of all intelligences regardless of substrate (pp. 409-422). He is also reacting against the anthropocentrism/anthropomorphism that is perhaps the default perspective (in my view more a Judeo-Christian inheritance, via Kant) in the humanities, in AI safety, and in society generally (pp. 115-116, Chapter 8). He advocates instead a ‘nowhen-nowhere’ view, which allows self-consciousness to go beyond particular (individual- and species-specific) experiences toward something more collective and timeless (pp. 487-493).

The book wraps up, in Chapter 8, with speculation on the future of philosophy in the presence of AGI/ASI. I’m not sure where Negarestani stands on moral realism, but he returns to the mature Plato (of the Philebus, Phaedo, and Theaetetus), whom he interprets as saying that understanding and implementing intelligence is the human vocation, and the closest we can get to realising the Form of the Good.

Commentary

There is obviously much to unpick here, but the reasons I think it might be interesting in an AI context are:

  • I’m excited about Negarestani’s bid (whether it’s workable I don’t know yet) to formalise the nowhere/nowhen perspective, and I would like to see where the overlaps are with Bostrom’s hierarchical norm structure; Singer and de Lazari-Radek’s treatment of Sidgwick’s ‘point-of-view of the universe’; and moral realism generally.

  • I think having a more robust sense of value and of reasons for doing things (beyond ‘<such-and-such thing> is good for currently-alive humans’ or ‘<such-and-such thing> is good for humans generally’, which are, AFAICT, nearly the default positions in AI alignment at present) might be useful if and when we are faced with AI systems sufficiently advanced to qualify for moral status. Again, Bostrom (and Carl Shulman) have shown the way here, and others, including Jeff Sebo as well as the s-risk cause area, are picking it up.

  • Negarestani’s book (and his broader writing) has a distinctly aesthetic flavour. By this I mean he discusses something called an ‘aesthetic operator’ in the section on types (pp. 430-437); writes in a way that is in equal measure difficult and poetic (a risky strategy, since it comes without the historical guarantees of value-for-time-invested that the ancients, as well as, say, Heidegger or Hegel, carry); and also writes experimentally (e.g. the excellent Cyclonopedia (2008)). Intelligence and Spirit doesn’t flesh out the ‘aesthetic operator’, but I am interested in the question of how aesthetics is a core part of the philosophical project – a view Wittgenstein seems to have held (at all stages of his thought), and one that animates a book (see Chapter 10) on the entanglement of aesthetics and philosophy by Alva Noë. As far as AI alignment goes, it feels intuitively implausible that we will achieve a satisfactory value loading for an AI without finding a way of communicating or transferring our aesthetic sensibilities (whether one talks about it in Negarestani’s or in Yudkowsky’s language).

  • I haven’t talked about the commentaries, but a point by AA Cavia referencing Frantz Fanon’s possible relevance caught my eye. Fanon, along with Edouard Glissant and Sylvia Wynter, wrote eloquently about the position of colonised people, particularly their psychological condition (e.g. as objects rather than fully-human subjects). There could arguably be interesting theoretical insights here about humanity’s collective (and individual) condition in a future where AI systems, operating in integrated networks with opaque cognition at levels of efficacy and coordination well beyond what we can understand or regulate, leave us a colonised species, or at best pensioners, revered but rendered harmless. Alternatively, Fanon et al.’s ideas could inform conversations about how to balance the interests of digital entities against the well-being of humans and non-human animals. The hope, improbable as it currently seems, would be for the flourishing of all sentient beings.

  1. ^

    While he discusses learning as a computational process (Ch. 6), he doesn’t talk much about deep learning, is critical (pp. 104-109) of most futurist/transhumanist writing, and, as a philosopher writing in 2018, doesn’t engage with alignment arguments.