Hey, thank you for this post!
It sounds extremely plausible that avoiding speciesism (on the grounds of intelligence and other factors) in AI, and reducing it in humans, should be a priority.
Out of curiosity, it seems important to identify what our current species bias on the grounds of intelligence actually looks like:
a) ‘any species with an average intelligence that is lower than average human intelligence is viewed to be morally less significant.’
b) ‘any species that is (on average) less intelligent than one’s own species is morally less significant.’
If (a), would this imply that an AI would not harm us on the basis of speciesism grounded in inferior intelligence, since its bias would be anchored to human intelligence rather than its own?
I would love to hear people’s thoughts on this (although I realise that such a discussion might, in the broader context, be a distraction from the most important aspect to focus on: avoiding speciesism in the first place).