This is absurd. Not because human lives are inherently more valuable than other animal lives, but because the calculation is so crude that it cannot support the conclusion.
Basing the calculation on a simple neuron count is flatly wrong, because humans are not even at the top under an even 1:1 weighting; see https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons . If it really were that easy, the point could be made far more easily with elephant charities than with chicken charities. That alone should make it obvious that the argument from neuron count fails.
And even if there is something to the idea, why arbitrarily take a square root in the calculation? Its only apparent purpose is to shrink the ratio: from 391 down to 20.
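For concreteness, here is a minimal sketch of where those two numbers come from, assuming the commonly cited neuron-count figures of roughly 8.6 × 10^10 for a human and 2.2 × 10^8 for a chicken (the exact inputs the original calculation used are an assumption on my part):

```python
import math

# Commonly cited approximate neuron counts (assumed inputs):
HUMAN_NEURONS = 86e9    # ~86 billion
CHICKEN_NEURONS = 2.2e8  # ~220 million

# Raw 1:1 weighting by neuron count.
raw_ratio = HUMAN_NEURONS / CHICKEN_NEURONS
print(round(raw_ratio))        # -> 391

# The same ratio after the arbitrary square root is applied.
sqrt_ratio = math.sqrt(raw_ratio)
print(round(sqrt_ratio))       # -> 20
```

Nothing in the argument justifies the square root step; it is exactly the move that compresses a ~391:1 ratio into a ~20:1 one.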
The argument also assumes a direct relationship between neuron count and capacity for suffering, ignoring all other brain functions such as “thinking, memory, language, things that don’t contribute to the raw suffering that is necessary for moral worth,” which should itself strike anyone as absurd for obvious reasons.
Finally, there is the underlying assumption that ethics is grounded in suffering, which is a whole other subject (one that need not be discussed here, and which is perhaps the least controversial link in the chain).
If any one of these links fails, the conclusion is in serious doubt; here, nearly the entire chain is questionable.
I agree with the gist of the previous comments. This is just basic semantic confusion: people or agents who do not exist exist only as theoretical exercises in mindspace, such as this one. By definition they exist in no other space, and so cannot have real rights to which ethics should be applied.
So focusing on [not just some but] all nonexistent agents is not controversial; it is simply wrong at a basic level. Nor, ironically, do I think this is being closed-minded: it follows by definition.
What would be more productive to discuss are possible agents who could be realized, along with their potential rights, which is a fundamentally different category, though not mutually exclusive with that of nonexistent agents.