Akash—thanks for posting this. Scott Alexander, as usual, has good insights, and is well worth reading here.
I think at some point, EAs might have to bite the bullet, set aside our all-too-close ties to the AI industry, and realize that ‘AGI is an X-risk’ boils down to ‘OpenAI, DeepMind, and other AI companies that aren’t actually taking AIXR seriously are the real X-risks’—and should be viewed and treated accordingly.
100% agree.
I like the analogy with ExxonMobil; I think it’s helpful to keep that comparison in mind.
I’ve mentioned before that I don’t think companies that work on AI should have a significant voice in the AI discourse, at least within the EA sphere—we can’t control the public discourse anyway.
The primary purpose of a company (maybe 80%+ of its purpose) is to make money, plain and simple. The job of its PR people is to garner public support through whatever means necessary, often by sounding as reasonable as possible. Their press releases, blogs, podcasts, etc. should be treated at worst as dangerous propaganda, and at best as biased and compromised arguments.
Why, then, do we engage with their arguments so seriously? There are already plenty of contrasting opinions on AI safety among neutral researchers, opinions that are hard to understand and important to engage with. Why throw compromised perspectives into the mix?
I lean towards using these kinds of blogs to understand the plans of AI companies and the arguments we need to counter in the public sphere, not as reasonable, well-thought-out opinions from neutral people.