Executive summary: The e/acc (effective accelerationism) movement’s arguments for accelerating AI development are flawed, as they ignore the unique existential risks posed by advanced AI and the narrowing path for humanity’s survival as technology progresses.
Key points:
Most possible worlds are incompatible with human life, making it crucial to carefully aim advanced AI to avoid extinction.
Technological progress has historically benefited humans but often harmed less advanced beings, suggesting risks for humanity from superintelligent AI.
The “narrowing path” argument applies beyond AI, indicating increasing existential risks from various technologies as human power grows.
Bioengineered pathogens are a comparable near-term existential risk to AI, potentially justifying acceleration to develop protective AI capabilities.
Potential counterarguments like “sufficiently intelligent AI will be inherently safe” or “we can build safe oracle AIs” are addressed but considered unlikely.
The author urges taking these questions seriously given the high stakes and limited time to address them before advanced AI development.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.