Slightly independent of the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on “we need to beat China” arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular, I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an “overwhelming majority of EAs involved in AI safety” disagree with it even now.

Example from August 2022:

https://www.astralcodexten.com/p/why-not-slow-ai-progress
So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies...
This is the most common question I get on AI safety posts: why isn't the rationalist / EA / AI safety movement doing this more? It's a great question, and it's one that the movement asks itself a lot...
Still, most people aren't doing this. Why not?
Later, talking about why attempting a regulatory approach to avoiding a race is futile:
The biggest problem is China. US regulations don't affect China. China says that AI leadership is a cornerstone of their national security, both as a massive boon to their surveillance state, and because it would boost their national pride if they could beat America in something so cutting-edge.
So the real question is: which would we prefer? OpenAI gets superintelligence in 2040? Or Facebook gets superintelligence in 2044? Or China gets superintelligence in 2048?
Might we be able to strike an agreement with China on AI, much as countries have previously made arms control or climate change agreements? This is . . . not technically prevented by the laws of physics, but it sounds really hard. When I bring this challenge up with AI policy people, they ask “Harder than the technical AI alignment problem?” Okay, fine, you win this one.
I feel like a generic non-EA policy person reading that post could well end up where the congressional commission landed? It's right there in the section that most explicitly talks about policy.
Scott's last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow that we shouldn't race). But I can see how a politician reading this article wouldn't see that implication.