That's fair. I personally like that this forces people to come to terms with the fact that interventions targeted at small animals are way more scalable than those targeted at larger ones. People might decide on some moral weights which cancel out the scale of small-animal work, but that's a nontrivial philosophical assumption, and I like prompting people to think about whether it's actually reasonable.
I think "animals that have more neurons or are more complex are morally more important" is not a "nontrivial philosophical assumption".
It indeed strikes me as a quite trivial philosophical assumption, the denial of which would, I think, seem absurd to almost anyone considering it. Maybe one can argue the effect is offset by the sheer number of small animals, but I think you will find almost no one on the planet who would argue that these things do not matter.
It indeed strikes me as a quite trivial philosophical assumption, the denial of which would, I think, seem absurd to almost anyone considering it
On the contrary, approximately everyone denies this! Approximately 0% of Americans think that humans with more neurons than other humans have more moral value, for example.[1]

[1] Citation needed, but I would be pretty surprised if this were false. Would love to hear contrary evidence though!
Come on, you know you are using a hilariously unrepresentative datapoint here. Within humans, variance in neuron count explains only a small fraction of the variance in experience, and we also have strong societal norms that push people's maps towards pretending differences like this don't matter.
Unrepresentative of what? At least in my university ethics courses, we spent way more time arguing about the rights of anencephalic children or human fetuses than about insects. (And I would guess that neuron count explains a large fraction of the variance in experience between adult and fetal humans, for example.)
In any case: I think most people's moral intuitions are terrible, and you shouldn't learn a ton from the fact that people disagree with you. But as a purely descriptive matter, there are plenty of people who disagree with you, so much so that reading their arguments is a standard part of bioethics 101 in the US.
It's unrepresentative of the degree to which people believe that correlates like neuron count, brain size, and behavioral complexity are indicators of moral relevance across species (which is the question at hand here).