Executive summary: The AI safety community has made several mistakes, including overreliance on theoretical arguments, insularity, pushing extreme views, supporting leading AGI companies, insufficient independent thought, advocating for an AI development pause, and discounting policy as a route to safety.
Key points:
- Too much emphasis on theoretical arguments (e.g. from Yudkowsky and Bostrom) and not enough empirical research, especially in the past.
- Being too insular by not engaging with other fields (e.g. AI ethics, academia, social sciences), using jargony language, and being secretive about research.
- Pushing views that are too extreme or weird, contributing to low-quality and polarizing discourse around AI safety.
- Supporting the leading AGI companies (OpenAI, Anthropic, DeepMind), which may be accelerating AGI development and fueling an unsafe race.
- Insufficient independent thought, with many deferring to a small group of AI safety elites.
- Advocating for a pause to AI development, which some argue could be counterproductive.
- Historically discounting public outreach, policy, and governance as potential routes to AI safety, in favor of solving technical alignment problems directly.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.