[Question] What are some good critiques of ‘e/acc’ (‘Effective Accelerationism’)?
On social media (especially Twitter), the debate over AI extinction risk is strongly influenced by a smallish group of ‘e/acc’ people (‘Effective Accelerationists’), who seem to dismiss X risks and ‘AI Doomers’ (including many EAs), encourage AGI development at maximum speed (including a fast takeoff towards ASI), reject any regulation of the AI industry, and look forward to a ‘post-human’ future of mostly machine intelligence. The e/acc movement seems closely associated with Singularity enthusiasts and transhumanists (although plenty of people in those subcultures aren’t e/acc).
What are the best medium-length critiques of the e/acc movement, ideally ones that are intellectually and morally serious?