I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that:

1. AI will be a revolutionary technology that affects nearly every aspect of society.
2. Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised.
I’m sorry to say, but EA really doesn’t have that much of a say on the matter. The AI boosters have chosen their side, and it’s on the political right. That means the home for anti-AI action will end up on the left, a natural fit for anti-big-business, pro-regulation ideas. If EA doesn’t embrace this reality, some other left-wing anti-AI movement will probably pop up, and it’s going to leave you in the dust.
One could also reason that the left can be counted on to be anti-AI going forwards, and that the objective for EA should be to foster anti-AI forces on the right. The H-1B split shows that tech leaders don’t have ideological control over the right. Sam Altman and Elon Musk don’t get along either. In fact, Sam Altman doesn’t seem to have a strong popular constituency in either party at this point.
Left-progressive online people seem to be consolidating on an anti-AI position, but one mostly derived from resistance to the presumed economic impacts of AI art, badness-by-association inherited from the big tech / tech billionaires / ‘techbro’ cluster, and, on the academic side, from concern about algorithmic bias and the like. However, they seem to be failing at extrapolation: “AI bad” gets misgeneralized into skepticism about current and future AI capabilities.
Left-Marxist people seem to be thinking a bit more clearly about this (i.e. extrapolating, applying any economic model at all, looking a bit into the tech). See an example here, or a summary (EDIT 2025-02-08: of the same example piece) here. However, the labs are based in the US, a country where associating with Marxists is a very bad idea if you want your policies to get implemented.
These two leftist stances are mostly orthogonal to concerns about AI x-risk and catastrophic misuse. A lot of activists, however, believe that the public’s attention is zero-sum, and I suspect that is the main reason coalition-building with the preceding two groups has not happened much. I still think it is possible.
About the American right: some actors have largely succeeded in marrying China-hawkism with AI-boosterism. I expect this association to be very sticky, but it may be counteracted by reactionary impulses coming from spooked cultural conservatives.
> “AI bad” gets misgeneralized into skepticism about current and future AI capabilities.
This point is interesting. I almost wonder if it’s better not to argue against this. If we argue against it, maybe the left gets attached to this position and becomes slower to update even as unemployment increases.
I think the appropriate medium-term fit for the movement will be with organised labour (whether left or right!), as I’ve said before here. The economic impacts are not yet strong enough to show up in the unemployment rate, particularly since anti-inflationary policies typically prop up the employment rate a bit. But they will presumably be felt soon, and the natural home for those affected will be the labour movement, which despite its currently weakened state will always be bigger and more mobile than, say, PauseAI.

(Specifically in tech, where I have more experience in labour organising, the largest political contingent among the workers has always been on the labour left. For example, [Bernie Sanders was far and away the most-donated-to candidate among big tech employees in 2020](https://www.theguardian.com/us-news/2020/mar/02/election-2020-tech-workers-donations-bernie-sanders).)

In that world, the best thing EAs can do is support that movement. Not necessarily explicitly or directly: I can see a world where Open Phil lobbies to strengthen the U.S. National Labor Relations Board (NLRB) and overturn key Supreme Court decisions such as Janus v. AFSCME. But such a move would be perceived as highly political, and I wonder whether the allergy to labour-left politics within EA precludes it.