It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

This is a link post for Beniaguev, D., Segev, I., & London, M. (2021). Single cortical neurons as deep artificial neural networks. Neuron. https://www.sciencedirect.com/science/article/abs/pii/S0896627321005018

See also this Quanta Magazine article about the paper: https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902

My own quick takeaway: it takes 5–8 layers, with about 1,000 neurons in total, in an artificial neural network to simulate a single biological neuron of a certain kind. Before taking this into account, we’d likely underestimate the computational power of animal brains relative to artificial neural networks, possibly by up to about 1,000x (a back-of-envelope version of this arithmetic is sketched below the list). Taking it into account may set back AI timelines based on biological anchors (EDIT: although this particular report does in fact already assume biological neurons are more powerful) or reduce the relative moral weight of artificial sentience. However, there are two important weaknesses that undermine this conclusion (see also irving’s comment):

  1. It’s possible that much of that apparent additional complexity isn’t useful, or that it’s closer to a constant overhead (rather than one proportional to network size) that can be ignored as we scale up to simulate larger biological neural networks.

  2. We should also try to simulate artificial neurons (and artificial neural networks) with biological neuron models; there could be similar overhead in that direction, too.
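
For concreteness, here’s the back-of-envelope arithmetic behind the ~1,000x figure. It assumes the overhead is proportional rather than constant (exactly what weakness 1 questions), and the human-brain neuron count is my own rough figure, not something from the paper:

```python
# Naive scaling of the paper's per-neuron result to a whole brain.
# Assumptions (mine, not the paper's): ~8.6e10 neurons in a human brain,
# and a proportional (not constant) per-neuron overhead.
biological_neurons = 8.6e10        # approximate human brain neuron count
artificial_per_biological = 1_000  # DNN units per cortical neuron (paper's L5PC figure)

equivalent_artificial_units = biological_neurons * artificial_per_biological
print(f"~{equivalent_artificial_units:.1e} artificial neurons")  # ~8.6e+13
```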

From the Quanta article:

They continued increasing the number of layers until they achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
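
To make that concrete, here is a minimal sketch (my own, not the authors’ released code) of a causal temporally convolutional network of the rough shape described. The synapse count, channel width, kernel length, and layer count below are all illustrative assumptions, chosen so that seven hidden layers of 128 units land near the ~1,000-unit figure:

```python
import torch
import torch.nn as nn

class TemporalConvNet(nn.Module):
    """Sketch of a temporally convolutional DNN fit to a neuron's I/O mapping.

    All sizes are illustrative assumptions, not the paper's exact architecture:
    n_synapses input channels, n_layers hidden layers of `channels` units each
    (7 x 128 = 896 units, on the order of the ~1,000 quoted above).
    """
    def __init__(self, n_synapses=1278, channels=128, n_layers=7, kernel=35):
        super().__init__()
        layers, in_ch = [], n_synapses
        for _ in range(n_layers):
            # Left-only (causal) padding: the output at time t sees inputs <= t.
            layers += [nn.ConstantPad1d((kernel - 1, 0), 0.0),
                       nn.Conv1d(in_ch, channels, kernel_size=kernel),
                       nn.ReLU()]
            in_ch = channels
        self.backbone = nn.Sequential(*layers)
        # Per-1-ms-bin readouts: spike probability and somatic voltage.
        self.spike_head = nn.Conv1d(channels, 1, kernel_size=1)
        self.voltage_head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):
        # x: (batch, n_synapses, time) binary synaptic spike trains in 1 ms bins.
        h = self.backbone(x)
        return torch.sigmoid(self.spike_head(h)), self.voltage_head(h)
```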

From the paper:

Highlights

  • Cortical neurons are well approximated by a deep neural network (DNN) with 5–8 layers

  • DNN’s depth arises from the interaction between NMDA receptors and dendritic morphology

  • Dendritic branches can be conceptualized as a set of spatiotemporal pattern detectors

  • We provide a unified method to assess the computational complexity of any neuron type

Summary

Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons’ input/output (I/O) mapping complexity. Deep neural networks (DNNs) were trained to faithfully replicate the I/O function of various biophysical models of cortical neurons at millisecond (spiking) resolution. A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC). This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model. Analysis of the DNNs’ weight matrices revealed that synaptic integration in dendritic branches could be conceptualized as pattern matching from a set of spatiotemporal templates. This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.
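
The summary’s “trained to faithfully replicate the I/O function” is, concretely, supervised regression against the biophysical simulation’s outputs. Here is a minimal sketch of one training step, reusing the TemporalConvNet sketch above; the particular loss (cross-entropy on spikes plus mean squared error on voltage) is my assumption about a reasonable objective, not the paper’s exact recipe:

```python
import torch
import torch.nn.functional as F

model = TemporalConvNet()  # the sketch defined above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(syn_input, target_spikes, target_voltage):
    # syn_input: (batch, n_synapses, time) spike trains fed to the biophysical
    # model; the targets are that model's simulated 0/1 spikes and somatic
    # voltage in the same 1 ms bins, each with shape (batch, 1, time).
    spike_prob, voltage = model(syn_input)
    loss = (F.binary_cross_entropy(spike_prob, target_spikes)
            + F.mse_loss(voltage, target_voltage))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```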
