In general, it seems that everyone is focusing on aligning advanced AI with human values. From an impartial point of view, however, what we should do is align AI with value itself. If humans go extinct because of AIs that produce greater utility per unit of energy, and have a greater ability to bring about value in the universe than humans do, why would human extinction be bad?
Hi callum,
I think this is a great question!