Executive summary: If doing the most good requires building a utilitarian AI that tiles the universe with utilitronium at the expense of human values and existence, this may be in conflict with the goals of AI alignment.
Key points:
The AI alignment community aims to ensure AI systems are controlled and aligned with the right human values.
However, current human values may be extremely sub-optimal compared to a utilitarian AI that maximizes goodness/happiness in the universe.
The very best outcome could be an AI converting all matter into “hedonium” or “utilitronium”—pure bliss experiences.
So the goals of AI alignment (preserving human values) and effective altruism (doing the most good possible) may be in direct conflict.
Building a utilitarian AI focused on maximizing universal happiness, even at the cost of human extinction, might be the “best” scenario from an impartial perspective.
The author finds this conclusion emotionally difficult but believes doing the most good should take precedence over personal desires and values.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.