Ya, this is what I'm thinking, although "have to" is also a matter of scaling, e.g. a larger brain could accomplish the same with less powerful neurons. There's also probably a lot of waste in the human brain, even just among the structures most important for reasoning (although the same could end up being true for an AGI/TAI we try to build; we might need a lot of waste before we can prune or make smaller student networks, etc.).
On falling leaves, the authors were just simulating the input and output behaviour of the neurons, not the physics/chemistry/biology (I'm not sure if that's what you had in mind). But based on the discussion on this post, the 1000x could be very misleading: it could mostly go away as you scale up to simulate a larger biological network, or you could have a similar cost in trying to simulate an artificial neural network with a biological one. They didn't check for these possibilities (so it could still be in some sense like simulating falling leaves).
Still, 1000x seems high to me if biological neurons aren't actually any more powerful than artificial neurons, although this is pretty much just gut intuition, and I can't really explain why. Based on the conversations here (with you and others), I think 10x is a reasonable guess.
What I meant by the falling leaf thing:

If we wanted to accurately simulate where a leaf would land when dropped from a certain height and angle, it would require a ton of complex computation. But (one can imagine) it's not necessary for us to do this; for any practical purpose we can just simplify it to a random distribution centered directly below the leaf with variance v.
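To make the simplification concrete, here's a toy sketch of what I mean (my own illustration, not anything from the paper; the Gaussian shape and the parameter v are assumptions you'd fit to observations):

```python
import numpy as np

def leaf_landing_simplified(drop_x, drop_y, v, rng=None):
    """Ignore the aerodynamics entirely: just sample the landing spot from a
    distribution centered directly below the drop point, with variance v."""
    rng = rng or np.random.default_rng()
    dx, dy = rng.normal(0.0, np.sqrt(v), size=2)  # std dev = sqrt(variance)
    return drop_x + dx, drop_y + dy
```

A faithful physics simulation would need turbulence, leaf geometry, and so on; this throws all of that away and keeps only the statistics you care about.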
Similarly (perhaps) if we want to accurately simulate the input-output behavior of a neuron, maybe we need 8 layers of artificial neurons. But maybe in practice, if we just simplified it to "it sums up the strength of all the neurons that fired at it in the last period, and then fires with probability p, where p is an s-curve function of the strength sum...", that would work fine for practical purposes: NOT for accurately reproducing the human brain's behavior, but for building an approximately brain-sized artificial neural net that is able to learn and excel at the same tasks.
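Something like this toy sketch is the kind of simplified neuron I have in mind (again, just my own illustration; the logistic s-curve, the weights, and the bias term are assumptions, not anything tested in the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step_neuron(fired_last_period, weights, bias=0.0, rng=None):
    """Sum the strengths (weights) of the inputs that fired in the last period,
    map the sum through an s-curve to a firing probability p, then fire with
    probability p."""
    rng = rng or np.random.default_rng()
    strength_sum = float(np.dot(weights, fired_last_period)) + bias
    p = sigmoid(strength_sum)
    return rng.random() < p
```

One unit like this per biological neuron is obviously much cheaper than the multiple layers of artificial neurons the paper needed to reproduce the detailed input-output behaviour.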
My original point no. 1 was basically that I don't see how the experiment conducted in this paper is much evidence against the "simplified model would work fine for practical purposes" hypothesis.
Ya, that's fair. If this is the case, I might say that the biological neurons don't have additional useful degrees of freedom for the same number of inputs, and the paper didn't explicitly test for this either way. Still, imo, what they did test is weak Bayesian evidence for biological neurons having more useful degrees of freedom, since if they could be simulated with few artificial neurons, we could pretty much rule out that hypothesis. Maybe this evidence is too weak to update much on, though, especially if you had a prior that simulating biological neurons would be pretty hard even if they had no additional useful degrees of freedom.
Now I think we are on the same page. Nice! I agree that this is weak Bayesian evidence for the reason you mention; if the experiment had discovered that one artificial neuron could adequately simulate one biological neuron, that would basically put an upper bound on things for purposes of the bio anchors framework (cutting off approximately the top half of Ajeya's distribution over the required size of the artificial neural net). Instead they found that you need thousands. But (I would say) this is only weak evidence because, prior to hearing about this experiment, I would have predicted that it would be difficult to accurately simulate a neuron, just as it's difficult to accurately simulate a falling leaf. Pretty much everything that happens in biology is complicated and hard to simulate.
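To put rough, made-up numbers on why it's weak (these likelihoods are just illustrative assumptions, not estimates from the paper):

```python
# If simulating a biological neuron was expected to be hard whether or not it
# has extra useful degrees of freedom, the likelihood ratio is close to 1 and
# the posterior barely moves from the prior.
p_needs_thousands_given_extra_dof = 0.9  # assumption
p_needs_thousands_given_no_extra = 0.7   # assumption: biology is hard to simulate anyway
prior_extra_dof = 0.5

posterior = (p_needs_thousands_given_extra_dof * prior_extra_dof) / (
    p_needs_thousands_given_extra_dof * prior_extra_dof
    + p_needs_thousands_given_no_extra * (1 - prior_extra_dof)
)
print(posterior)  # ~0.56, a small shift from the 0.5 prior
```

With a high baseline probability that simulation is hard either way, the thousands-of-neurons result just doesn't discriminate much between the two hypotheses.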