Executive summary: The risk of suffering from an aligned AI controlled by a profit-seeking entity may be higher than the extinction risk from a misaligned AI.
Key points:
An aligned AI controlled by a corporation risks being used to maximize profits without checks and balances. This could lead to dystopia.
Absolute power granted by an aligned AI risks corrupting those in control, with no way to transfer power safely.
Today’s corporations already control governments; an aligned AI would remove any remaining checks on their power.
Arbitrary individuals granted unchecked power through an aligned AI may be more dangerous than a misaligned AI.
More analysis is needed on the potential suffering enabled by aligned AI rather than just extinction risks.
The author is new to AI safety and wants feedback, especially from technical experts, on these ideas and questions.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.