I don’t want to claim all EAs believe the same things, but if the congressional commission had listened to what you might call the “central” EA position, it would not be recommending an arms race because it would be much more concerned about misalignment risk. The overwhelming majority of EAs involved in AI safety seem to agree that arms races are bad and misalignment risk is the biggest concern (within AI safety). So if anything this is a problem of the commission not listening to EAs, or at least selectively listening to only the parts they want to hear.
In most cases this is rumor-based, but I have heard that a substantial chunk of the OP-adjacent EA policy space has been quite hawkish for many years, and the things I have heard suggest that a bunch of key leaders “basically agreed with the China part of situational awareness”.
Again, people should really take this with a double dose of salt; I am personally at like 50/50 on this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn’t seem crazy (though various things could have been lost in a game of telephone, and being very concerned about China doesn’t necessarily mean endorsing a “Manhattan project to AGI”, though the rumors I have heard did sound like they would endorse that).
Less rumor-based: I also know that Dario has historically been very hawkish, and “needing to beat China” was historically one of the top justifications given for why Anthropic does capability research. I have heard this from many people, so I feel more comfortable saying it with fewer disclaimers, but I am still only about 80% on it being true.
Overall, my current guess is that indeed a large-ish fraction of the EA policy people would have pushed for things like this, or at least would not have pushed back on it much. My guess is “we” are at least somewhat responsible for this, and there is much less of a consensus against a U.S.-China arms race among EAs working in US governance than one might think, so the above is not much evidence that there was no listening, or only very selective listening, to EAs.
I looked through the congressional commission report’s list of testimonies for plausibly EA-adjacent people. The only EA-adjacent org I saw was CSET, which had two testimonies (1, 2). From a brief skim, neither one looked clearly pro- or anti-arms race: they seemed vaguely pro-arms race on vibes, but I didn’t see any claims that clearly encouraged an arms race. Like I said, though, I only briefly skimmed them, so I could have missed a lot.
This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (and maybe still is, I’m not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC. EAs were involved in this because they thought it was true and important, and because they thought the then-current false fears in the greater natsec community were enhancing arms race risks (this was when Jason was leading CSET, with OP supporting its founding). Some of the same people were also supportive of export controls, which are more ambiguous in sign here.
The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in everything I have seen written about them was beating China in AI capabilities development.
Of course no one likes a symmetric arms race, but the question is whether people favored the “quickly establish overwhelming dominance over China by investing heavily in AI” strategy or the “try to negotiate with China and not set an example of racing towards AGI” strategy. My sense is that many people favored the former (though definitely not all, and I am not saying there is anything like consensus; my sense is it’s a quite divisive topic).
To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent “AI Security Forum” in Vegas, many x-risk concerned people expressed very hawkish opinions.
Yeah, re the export controls, I was trying to say “I think CSET was generally anti-escalatory, but in contrast, the effect of their export controls work was less so” (though I used the word “ambiguous” because my impression was that some relevant people saw it as a point in favor of that work that it also mostly didn’t directly advance AI progress in the US, i.e. it set China back without necessarily bringing the US forward towards AGI). To use your terminology, my impression is some of those people were “trying to establish overwhelming dominance over China”, but not by “investing heavily in AI”.
It looks to me like the online EA community, and the EAs I know IRL, have a fairly strong consensus that arms races are bad. Perhaps there’s a divide in opinions with most self-identified EAs on one side, and policy people / company leaders on the other side—which in my view is unfortunate since the people holding the most power are also the most wrong.
(Is there some systematic reason why this would be true? At least one part of it makes sense: people who start AGI companies must believe that building AGI is the right move. It could also be that power corrupts, or something.)
So maybe I should say the congressional commission should’ve spent less time listening to EA policy people and more time reading the EA Forum. Which obviously was never going to happen but it would’ve been nice.
Slightly independent of the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on ‘we need to beat China’ arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular, I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an ‘overwhelming majority of EAs involved in AI safety’ disagree with it even now.

Example from August 2022:

https://www.astralcodexten.com/p/why-not-slow-ai-progress
So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies...
This is the most common question I get on AI safety posts: why isn’t the rationalist / EA / AI safety movement doing this more? It’s a great question, and it’s one that the movement asks itself a lot...
Still, most people aren’t doing this. Why not?
Later, talking about why attempting a regulatory approach to avoiding a race is futile:
The biggest problem is China. US regulations don’t affect China. China says that AI leadership is a cornerstone of their national security—both as a massive boon to their surveillance state, and because it would boost their national pride if they could beat America in something so cutting-edge.
So the real question is: which would we prefer? OpenAI gets superintelligence in 2040? Or Facebook gets superintelligence in 2044? Or China gets superintelligence in 2048?
Might we be able to strike an agreement with China on AI, much as countries have previously made arms control or climate change agreements? This is . . . not technically prevented by the laws of physics, but it sounds really hard. When I bring this challenge up with AI policy people, they ask “Harder than the technical AI alignment problem?” Okay, fine, you win this one.
I feel like a generic non-EA policy person reading that post could well end up where the congressional commission landed? It’s right there in the section that most explicitly talks about policy.
Huh, fwiw this is not my anecdotal experience. I would suggest that this is because I spend more time around doomers than you do, and doomers are very influenced by Yudkowsky’s “don’t fight over which monkey gets to eat the poison banana first” framing, but that seems contradicted by your example being ACX, which is also quite doomer-adjacent.
That sounds plausible. I do think of ACX as much more ‘accelerationist’ than the doomer circles, for lack of a better term. Here’s a more recent post from October 2023 informing that impression; the excerpt below probably does a better job than I can of adding nuance to Scott’s position.

https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate
Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitarianism + illiberalism+mobocracy, fertility collapse and dysgenics will impoverish the world and accelerate its decaying institutional quality. I don’t spend much time worrying about any of these, because I think they’ll take a few generations to reach crisis level, and I expect technology to flip the gameboard well before then. But if we ban all gameboard-flipping technologies (the only other one I know is genetic enhancement, which is even more bannable), then we do end up with bioweapon catastrophe or social collapse. I’ve said before I think there’s a ~20% chance of AI destroying the world. But if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela. That doesn’t mean I have to support AI accelerationism because 20% is smaller than 50%. Short, carefully-tailored pauses could improve the chance of AI going well by a lot, without increasing the risk of social collapse too much. But it’s something on my mind.
Scott’s last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow from that that we shouldn’t race). But I can see how a politician reading this article wouldn’t see that implication.
Yep, my impression is that this is an opinion that people mostly adopted after spending a bunch of time in DC and engaging with governance stuff, and so is not something represented in the broader EA population.
My best explanation is that when working in governance, being pro-China is just very costly, and in particular the combination of believing that AI will be very powerful and that there is no urgency to beat China to it is very anti-memetic in DC, so people working in the space started adopting those stances.
But I am not sure. There are also non-terrible arguments for beating China being really important (though they are mostly premised on alignment being relatively easy, which seems very wrong to me).
(though they are mostly premised on alignment being relatively easy, which seems very wrong to me)
Not just alignment being easy, but alignment being easy with overwhelmingly high probability. It seems to me that pushing for an arms race is bad even if there’s only a 5% chance that alignment is hard.
I think most of those people believe that having an AI aligned to “China’s values” would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient if you also think there is a greater than 5% chance of China ending up with “aligned AI” instead.
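To make the implicit comparison concrete (a minimal sketch of my own, with purely illustrative numbers, assuming the disputed valuation that a “China-aligned AI” is as bad as a misalignment catastrophe): on those values, racing looks good whenever the extra misalignment risk from racing is smaller than the chance that not racing hands China an “aligned AI”. For example, EV(race) = 1 − P(misalignment | race) = 1 − 0.05 = 0.95, while EV(don’t race) = 1 − P(misalignment | don’t race) − P(China gets “aligned AI” | don’t race) = 1 − 0.02 − 0.10 = 0.88, so the 5% figure alone does not settle the question for someone holding that valuation.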
I think treating a China-aligned AI as comparably bad to a misalignment catastrophe is not a reasonable position to hold, but I don’t know how to constructively argue against it in a short comment, so I’ll just register my disagreement.
Like, presumably China’s values include humans existing and having mostly good experiences.

Yep, I agree with this, but it nevertheless appears to be a relatively prevalent opinion among many EAs working in AI policy.

A somewhat relevant article that I discovered while researching this: Longtermists Are Pushing a New Cold War With China (Jacobin):
The Biden administration’s decision, in October of last year, to impose drastic export controls on semiconductors, stands as one of its most substantial policy changes so far. As Jacobin‘s Branko Marcetic wrote at the time, the controls were likely the first shot in a new economic Cold War between the United States and China, in which both superpowers (not to mention the rest of the world) will feel the hurt for years or decades, if not permanently.
[...]
The idea behind the policy, however, did not emerge from the ether. Three years before the current administration issued the rule, Congress was already receiving extensive testimony in favor of something much like it. The lengthy 2019 report from the National Security Commission on Artificial Intelligence suggests unambiguously that the “United States should commit to a strategy to stay at least two generations ahead of China in state-of-the-art microelectronics” and [...]
The commission report makes repeated references to the risks posed by AI development in “authoritarian” regimes like China’s, predicting dire consequences as compared with similar research and development carried out under the auspices of liberal democracy. (Its hand-wringing in particular about AI-powered, authoritarian Chinese surveillance is ironic, as it also ominously exhorts, “The [US] Intelligence Community (IC) should adopt and integrate AI-enabled capabilities across all aspects of its work, from collection to analysis.”)
These emphases on the dangers of morally misinformed AI are no accident. The commission head was Eric Schmidt, tech billionaire and contributor to Future Forward, whose philanthropic venture Schmidt Futures has both deep ties with the longtermist community and a record of shady influence over the White House on science policy. Schmidt himself has voiced measured concern about AI safety, albeit tinged with optimism, opining that “doomsday scenarios” of AI run amok deserve “thoughtful consideration.” He has also coauthored a book on the future risks of AI, with no lesser an expert on morally unchecked threats to human life than notorious war criminal Henry Kissinger.
Also of note is commission member Jason Matheny, CEO of the RAND Corporation. Matheny is an alum of the longtermist Future of Humanity Institute (FHI) at the University of Oxford, who has claimed existential risk and machine intelligence are more dangerous than any historical pandemics and “a neglected topic in both the scientific and governmental communities, but it’s hard to think of a topic more important than human survival.” This commission report was not his last testimony to Congress on the subject, either: in September 2020, he would individually speak before the House Budget Committee urging “multilateral export controls on the semiconductor manufacturing equipment needed to produce advanced chips,” the better to preserve American dominance in AI.
Congressional testimony and his position at the RAND Corporation, moreover, were not Matheny’s only channels for influencing US policy on the matter. In 2021 and 2022, he served in the White House’s Office of Science and Technology Policy (OSTP) as deputy assistant to the president for technology and national security and as deputy director for national security (the head of the OSTP national security division). As a senior figure in the Office — to which Biden has granted “unprecedented access and power” — advice on policies like the October export controls would have fallen squarely within his professional mandate.
The most significant restrictions advocates (aside from Matheny) to emerge from CSET, however, have been Saif Khan and Kevin Wolf. The former is an alum from the Center and, since April 2021, the director for technology and national security at the White House National Security Council. The latter has been a senior fellow at CSET since February 2022 and has a long history of service in and connections with US export policy. He served as assistant secretary of commerce for export administration from 2010–17 (among other work in the field, both private and public), and his extensive familiarity with the US export regulation system would be valuable to anyone aspiring to influence policy on the subject. Both would, before and after October, champion the semiconductor controls.
At CSET, Khan published repeatedly on the topic, time and again calling for the United States to implement semiconductor export controls to curb Chinese progress on AI. In March 2021, he testified before the Senate, arguing that the United States must impose such controls “to ensure that democracies lead in advanced chips and that they are used for good.” (Paradoxically, in the same breath the address calls on the United States to both “identify opportunities to collaborate with competitors, including China, to build confidence and avoid races to the bottom” and to “tightly control exports of American technology to human rights abusers,” such as… China.)
Among Khan’s coauthors was aforementioned former congressional hopeful and longtermist Carrick Flynn, previously assistant director of the Center for the Governance of AI at FHI. Flynn himself individually authored a CSET issue brief, “Recommendations on Export Controls for Artificial Intelligence,” in February 2020. The brief, unsurprisingly, argues for tightened semiconductor export regulation much like Khan and Matheny.
This February, Wolf too provided a congressional address on “Advancing National Security and Foreign Policy Through Sanctions, Export Controls, and Other Economic Tools,” praising the October controls and urging further policy in the same vein. In it, he claims knowledge of the specific motivations of the controls’ writers:
BIS did not rely on ECRA’s emerging and foundational technology provisions when publishing this rule so that it would not need to seek public comments before publishing it.
These motivations also clearly included exactly the sorts of AI concerns Matheny, Khan, Flynn, and other longtermists had long raised in this connection. In its background summary, the text of one rule explicitly links the controls with hopes of retarding China’s AI development. Using language that could easily have been ripped from a CSET paper on the topic, the summary warns that “‘supercomputers’ are being used by the PRC to improve calculations in weapons design and testing including for WMD, such as nuclear weapons, hypersonics and other advanced missile systems, and to analyze battlefield effects,” as well as bolster citizen surveillance.
[...]
Longtermists, in short, have since at least 2019 exerted a strong influence over what would become the Biden White House’s October 2022 semiconductor export rules. If the policy is not itself the direct product of institutional longtermists, it at the very least bears the stamp of their enthusiastic approval and close monitoring.
Just as it would be a mistake to restrict interest in longtermism’s political ambitions exclusively to election campaigns, it would be shortsighted to treat its work on semiconductor infrastructure as a one-off incident. Khan and Matheny, among others, remain in positions of considerable influence, and have demonstrated a commitment to bringing longtermist concerns to bear on matters of high policy. The policy sophistication, political reach, and fresh-faced enthusiasm on display in its semiconductor export maneuvering should earn the AI doomsday lobby its fair share of critical attention in the years to come.
The article seems quite biased to me, but I do think some of the basics here make sense and match with things I have heard (but also, some of it seems wrong).
Maybe instead of “where people actually listen to us” it’s more like “EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about but is importantly different from the world in which EA didn’t exist.”
On that framing, I agree that that’s something that happens and that we should be able to anticipate will happen.