Somewhat independent of the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on “we need to beat China” arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular, I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an “overwhelming majority of EAs involved in AI safety” disagree with it even now.

Example from August 2022:

https://www.astralcodexten.com/p/why-not-slow-ai-progress
So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies...
This is the most common question I get on AI safety posts: why isn’t the rationalist / EA / AI safety movement doing this more? It’s a great question, and it’s one that the movement asks itself a lot...
Still, most people aren’t doing this. Why not?
Later, talking about why attempting a regulatory approach to avoiding a race is futile:
The biggest problem is China. US regulations don’t affect China. China says that AI leadership is a cornerstone of their national security – both as a massive boon to their surveillance state, and because it would boost their national pride if they could beat America in something so cutting-edge.
So the real question is: which would we prefer? OpenAI gets superintelligence in 2040? Or Facebook gets superintelligence in 2044? Or China gets superintelligence in 2048?
Might we be able to strike an agreement with China on AI, much as countries have previously made arms control or climate change agreements? This is . . . not technically prevented by the laws of physics, but it sounds really hard. When I bring this challenge up with AI policy people, they ask “Harder than the technical AI alignment problem?” Okay, fine, you win this one.
I feel like a generic non-EA policy person reading that post could well end up where the congressional commission landed? It’s right there in the section that most explicitly talks about policy.
Huh, fwiw this is not my anecdotal experience. I would suggest that this is because I spend more time around doomers than you, and doomers are very influenced by Yudkowsky’s “don’t fight over which monkey gets to eat the poison banana first” framing, but that seems contradicted by your example being ACX, which is also quite doomer-adjacent.
That sounds plausible. I do think of ACX as much more “accelerationist” than the doomer circles, for lack of a better term. Here’s a more recent post from October 2023 informing that impression; the excerpt below probably does a better job than I can of adding nuance to Scott’s position.

https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate
Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism + mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don’t spend much time worrying about any of these, because I think they’ll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn’t mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it’s something on my mind.
Scott’s last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow that we shouldn’t race). But I can see how a politician reading this article wouldn’t see that implication.