And thinking more long term: when AGI builds a superintelligence, which in turn builds the next generation of agents, and humans end up 5-6 levels down the intelligence scale, what chance do we have of receiving moral consideration and care from those superior beings? Unless we realize we need to care for all beings, and build an AI that cares for all beings…