Thanks for this provocative and timely post.
I agree that EAs have been far too friendly to AI companies, too eager to get hired within these companies as internal AI safety experts, too willing to give money to support their in-house safety work, and too wary of upsetting AI leaders and developers.
This has diluted our warnings about extinction risks from AI. I’ve noticed that on social media like X, ordinary folks get very confused about EA attitudes towards AI. If we really think AI is extraordinarily dangerous, why would we be working with AI companies to advance capabilities, safety-wash their advances, and serve as their PR props to convince the public that they’re being cautious and responsible?
If rapid AI development is really an extinction risk, and EAs want to minimize extinction risks, it’s puzzling that we would see the AI industry as our allies rather than our enemies.
We’ve talked a lot over the years about the benefits of ‘engagement’ with the AI industry, ‘being in the room’ when they make decisions, having insider tracks to monitor and nudge their safety policies, etc. But, as this post points out, the OpenAI debacle might mark the end of that era. The voices for AI safety at OpenAI were decisively pushed out, in favor of maximum-speed commercialization and AGI development.
So, I think EAs need a new strategy for AI safety that is more confrontational, more political, and savvier about the cynicism, greed, and power of the AI industry. My essay on moral stigmatization of AI outlined one possible path. There might be other viable strategies, such as those outlined in this post.
As I’ve said many times over the last year or so, it’s time to stop playing nice with the AI industry. Especially since, following this recent OpenAI shakeup, they stopped playing nice with us.