Executive summary: From a total utilitarian perspective, the value of AI alignment work is unclear and plausibly neutral, while from a human preservationist or near-termist view, alignment is clearly valuable, but significantly delaying AI is more questionable.
Key points:
Unaligned AIs may be just as likely to be conscious and create moral value as aligned AIs, so alignment work is not clearly valuable from a total utilitarian view.
Human moral preferences are a mix of utilitarian and anti-utilitarian intuitions, so empowering them may not be better than an unaligned AI scenario by utilitarian lights.
From a human preservationist view, alignment is clearly valuable since it would help ensure human survival, but this view rests on speciesist foundations.
A near-termist view focused on benefits to people alive today would value alignment but not significantly delaying AI, since a long delay could deprive those people of potentially massive gains in wealth and longevity.
Arguments for delaying AI to reduce existential risk often conflate the risk of human extinction with the risk of human replacement by AIs, which are distinct from a utilitarian perspective.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.