Some things to think about if this post doesn’t click for you:

How can we, without hypocrisy, expect an AI superintelligence to respect “inferior” sentient beings like us when we extend no such respect to other species?

How can we expect AI to have “better” values than ours (more consistent, universal, compassionate, unbiased, etc.) while also always doing only what we want it to do?

What if an extremely powerful, extremely corrigible AI falls into the hands of a racist? A sexist? A speciesist?

And thinking longer term: when AGI builds a superintelligence, which in turn builds the next generation of agents, and humans sit five or six rungs down the intelligence ladder, what chance do we have of receiving moral consideration and care from those superior beings? Unless, that is, we realize we need to care for all beings, and build an AI that cares for all beings…