This is absurd. Not because human lives are necessarily more valuable than other animal lives, but because the calculation is far too crude to support the conclusion.
Basing the calculation on a simple neuron count is flat-out wrong, because humans aren’t even at the top under an even, 1:1 weighting; see: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons. If it were that easy, the point could be made far more readily by looking at elephant charities rather than chicken charities. It should be obvious from this alone that the argument from neuron count fails.
And even granting that there is something to the idea, why arbitrarily take a square root in the calculation? Its only apparent purpose is to pull the ratio closer together: from 391 down to about 20.
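A quick sketch of the arithmetic behind those two figures. The OP only gives the ratios, so the underlying neuron counts here are assumptions based on commonly cited estimates (roughly 86 billion for a human, roughly 220 million for a chicken):

```python
import math

# Commonly cited neuron counts (assumptions; the OP states only the ratios).
human_neurons = 86_000_000_000    # ~86 billion
chicken_neurons = 220_000_000     # ~220 million

ratio = human_neurons / chicken_neurons
print(round(ratio))             # roughly 391
print(round(math.sqrt(ratio)))  # roughly 20: the square root shrinks the gap
```

Nothing in the square root is forced by the underlying data; any concave transform would likewise compress the ratio, which is the point of the objection.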
And then it also assumes a direct relationship between neuron count and capacity for suffering, dismissing all other brain functions as “thinking, memory, language, things that don’t contribute to the raw suffering that is necessary for moral worth,” which should itself strike one as absurd for obvious reasons.
And then there is also the basic assumption that ethics is based on suffering, which is a whole other subject (and doesn’t need to be discussed here, and is perhaps the least controversial aspect).
Any one of these aspects being wrong is enough to cast serious doubt on the conclusion, and almost the entire chain of aspects is questionable.
I think that when someone puts a number on an unknown value, the only good response is to say whether it’s too high or too low. Merely describing the uncertainty doesn’t get us anywhere closer to knowing where to donate. Animal charities could easily be better than the OP suggests.
Point taken, but look at OP’s title: it is a definitive claim, one which is not supported at all by the accompanying text. Describing the uncertainty in fact does get us somewhere; it allows one to throw out the claim. “Animal charities could easily be better than the OP suggests” indeed, but they could also be far worse.
Unless someone submits new data one way or the other, though, the point is moot; which is to say, “back to the drawing board,” which is better than being led down a false path and is, again, an improvement over what was originally presented.
You can interpret “much more effective” as a claim about the expected value of a charity given current information. Personally, that’s what I think when I see such statements.
Since there are fewer than 1 million elephants alive today, even if each elephant has modestly more moral value than each human, elephant welfare is still very unlikely to meet the importance criterion.
Although I suspect this is more likely to be false than true, it is not inconceivable that less intelligent animals of a given species could matter more than humans, individually. For example, their experiences, good or bad, could be more intense than ours, or they could experience life more quickly*. They don’t need to have more neurons for this to be true, either, and I am skeptical of the importance of neuron count, too, in part because of this.
*If the rate didn’t matter, you’d run into problems with the theory of relativity: if you are moving very fast relative to another person, each of you will see the other as aging more slowly, all else equal. Each of you would then see the other as mattering more, all else equal, because the other would (from your frame) live longer. Ethics would then depend on the frame of reference, which is pretty weird, though perhaps not fatal.
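The time-dilation relation underlying this footnote's argument can be written out explicitly. For two inertial observers with relative speed $v$:

```latex
\Delta t' = \gamma \,\Delta t,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \;\ge\; 1
```

The relation is symmetric: each observer applies the same factor $\gamma$ to the other's clock, so each measures the other's lifetime as dilated. If moral weight tracked total duration alone, ignoring the rate of experience, each observer would rate the other as mattering more, which is the frame-dependence the footnote objects to.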
It is really too simple, in my opinion, to look only at either a flat, first-order count of neurons or some other gauge of experience (i.e., suffering and pleasure) and ignore potential higher-order effects. Perhaps I disagree with many on how utility should be defined.