It looks to me like the online EA community, and the EAs I know IRL, have a fairly strong consensus that arms races are bad. Perhaps there’s a divide in opinions with most self-identified EAs on one side, and policy people / company leaders on the other side—which in my view is unfortunate since the people holding the most power are also the most wrong.
(Is there some systematic reason why this would be true? At least one part of it makes sense: people who start AGI companies must believe that building AGI is the right move. It could also be that power corrupts, or something.)
So maybe I should say the congressional commission should’ve spent less time listening to EA policy people and more time reading the EA Forum. Which obviously was never going to happen but it would’ve been nice.
Slightly independent of the point Habryka is making, which may well also be true: my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on ‘we need to beat China’ arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular, I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt that an ‘overwhelming majority of EAs involved in AI safety’ disagree with it even now.
Example from August 2022:
https://www.astralcodexten.com/p/why-not-slow-ai-progress
So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies...
This is the most common question I get on AI safety posts: why isn’t the rationalist / EA / AI safety movement doing this more? It’s a great question, and it’s one that the movement asks itself a lot...
Still, most people aren’t doing this. Why not?
Later, talking about why attempting a regulatory approach to avoiding a race is futile:
The biggest problem is China. US regulations don’t affect China. China says that AI leadership is a cornerstone of their national security—both as a massive boon to their surveillance state, and because it would boost their national pride if they could beat America in something so cutting-edge.
So the real question is: which would we prefer? OpenAI gets superintelligence in 2040? Or Facebook gets superintelligence in 2044? Or China gets superintelligence in 2048?
Might we be able to strike an agreement with China on AI, much as countries have previously made arms control or climate change agreements? This is . . . not technically prevented by the laws of physics, but it sounds really hard. When I bring this challenge up with AI policy people, they ask “Harder than the technical AI alignment problem?” Okay, fine, you win this one.
I feel like a generic non-EA policy person reading that post could well end up where the congressional commission landed? It’s right there in the section that most explicitly talks about policy.
Scott’s last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (from which it would seem to follow that we shouldn’t race). But I can see how a politician reading this article wouldn’t see that implication.
Yep, my impression is that this is an opinion that people mostly adopted after spending a bunch of time in DC and engaging with governance stuff, and so is not something represented in the broader EA population.
My best explanation is that, when working in governance, being seen as pro-China is just very costly. In particular, combining the belief that AI will be very powerful with the belief that there is no urgency to beat China to it is very anti-memetic in DC, and so people working in the space started adopting those stances.
But I am not sure. There are also non-terrible arguments for beating China being really important (though they are mostly premised on alignment being relatively easy, which seems very wrong to me).
(though they are mostly premised on alignment being relatively easy, which seems very wrong to me)
Not just alignment being easy, but alignment being easy with overwhelmingly high probability. It seems to me that pushing for an arms race is bad even if there’s only a 5% chance that alignment is hard.
I think most of those people believe that “having an AI aligned to ‘China’s values’” would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient if you think there is a greater than 5% chance of China ending up with “aligned AI” instead.
I think that’s not a reasonable position to hold but I don’t know how to constructively argue against it in a short comment so I’ll just register my disagreement.
Like, presumably China’s values include humans existing and having mostly good experiences.
Yep, I agree with this, but it nevertheless appears to be a relatively prevalent opinion among many EAs working in AI policy.
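To make the structure of the 5% disagreement above concrete, here is a toy expected-value sketch. Every number in it is a hypothetical placeholder, and it bakes in two simplifying assumptions: racing means the US wins the race, and not racing means coordination buys enough time for alignment to succeed.

```python
# Toy expected-value comparison of "race" vs "don't race".
# All probabilities and values are hypothetical placeholders chosen to
# illustrate the structure of the argument, not anyone's actual estimates.

P_HARD = 0.05    # chance alignment turns out hard, so a raced AGI is misaligned
P_CHINA = 0.50   # chance China gets AGI first if the US doesn't race
V_US = 1.0       # value of AI aligned to "US values"
V_DOOM = 0.0     # value of a catastrophic misalignment failure

def ev_race() -> float:
    # Racing: assume the US wins, but if alignment is hard the rushed
    # system is misaligned and we get the catastrophe outcome.
    return (1 - P_HARD) * V_US + P_HARD * V_DOOM

def ev_no_race(v_china: float) -> float:
    # Not racing: assume alignment succeeds either way, but China may
    # end up with the (aligned) AI instead.
    return P_CHINA * v_china + (1 - P_CHINA) * V_US

# Worldview A: China-aligned AI is nearly as good as US-aligned AI.
print(ev_race(), ev_no_race(v_china=0.95))   # 0.95 vs 0.975 -> don't race

# Worldview B: China-aligned AI is comparably bad to misalignment itself.
print(ev_race(), ev_no_race(v_china=V_DOOM)) # 0.95 vs 0.50  -> race
```

Under worldview B, not racing beats racing only when P_CHINA falls below P_HARD, which is exactly the “5% is not sufficient if there is a greater than 5% chance of China ending up with ‘aligned AI’” condition in the comment above; under worldview A, even a 5% chance of hard alignment is enough to make racing look bad.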