Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. Often helping the EA Forum with various site issues. If something is broken on the site, there’s a good chance it’s my fault (Sorry!).
Habryka
Open Phil’s funding interests, priorities, and constraints have changed drastically in the last year or two. I agree they funded many things like this in the past.
A somewhat relevant article that I discovered while researching this: Longtermists Are Pushing a New Cold War With China—Jacobin
The Biden administration’s decision, in October of last year, to impose drastic export controls on semiconductors, stands as one of its most substantial policy changes so far. As Jacobin‘s Branko Marcetic wrote at the time, the controls were likely the first shot in a new economic Cold War between the United States and China, in which both superpowers (not to mention the rest of the world) will feel the hurt for years or decades, if not permanently.
[...]
The idea behind the policy, however, did not emerge from the ether. Three years before the current administration issued the rule, Congress was already receiving extensive testimony in favor of something much like it. The lengthy 2019 report from the National Security Commission on Artificial Intelligence suggests unambiguously that the “United States should commit to a strategy to stay at least two generations ahead of China in state-of-the-art microelectronics” and [...]
The commission report makes repeated references to the risks posed by AI development in “authoritarian” regimes like China’s, predicting dire consequences as compared with similar research and development carried out under the auspices of liberal democracy. (Its hand-wringing in particular about AI-powered, authoritarian Chinese surveillance is ironic, as it also ominously exhorts, “The [US] Intelligence Community (IC) should adopt and integrate AI-enabled capabilities across all aspects of its work, from collection to analysis.”)
These emphases on the dangers of morally misinformed AI are no accident. The commission head was Eric Schmidt, tech billionaire and contributor to Future Forward, whose philanthropic venture Schmidt Futures has both deep ties with the longtermist community and a record of shady influence over the White House on science policy. Schmidt himself has voiced measured concern about AI safety, albeit tinged with optimism, opining that “doomsday scenarios” of AI run amok deserve “thoughtful consideration.” He has also coauthored a book on the future risks of AI, with no lesser an expert on morally unchecked threats to human life than notorious war criminal Henry Kissinger.
Also of note is commission member Jason Matheny, CEO of the RAND Corporation. Matheny is an alum of the longtermist Future of Humanity Institute (FHI) at the University of Oxford, who has claimed existential risk and machine intelligence are more dangerous than any historical pandemics and “a neglected topic in both the scientific and governmental communities, but it’s hard to think of a topic more important than human survival.” This commission report was not his last testimony to Congress on the subject, either: in September 2020, he would individually speak before the House Budget Committee urging “multilateral export controls on the semiconductor manufacturing equipment needed to produce advanced chips,” the better to preserve American dominance in AI.
Congressional testimony and his position at the RAND Corporation, moreover, were not Matheny’s only channels for influencing US policy on the matter. In 2021 and 2022, he served in the White House’s Office of Science and Technology Policy (OSTP) as deputy assistant to the president for technology and national security and as deputy director for national security (the head of the OSTP national security division). As a senior figure in the Office — to which Biden has granted “unprecedented access and power” — advice on policies like the October export controls would have fallen squarely within his professional mandate.
The most significant restrictions advocates (aside from Matheny) to emerge from CSET, however, have been Saif Khan and Kevin Wolf. The former is an alum from the Center and, since April 2021, the director for technology and national security at the White House National Security Council. The latter has been a senior fellow at CSET since February 2022 and has a long history of service in and connections with US export policy. He served as assistant secretary of commerce for export administration from 2010–17 (among other work in the field, both private and public), and his extensive familiarity with the US export regulation system would be valuable to anyone aspiring to influence policy on the subject. Both would, before and after October, champion the semiconductor controls.
At CSET, Khan published repeatedly on the topic, time and again calling for the United States to implement semiconductor export controls to curb Chinese progress on AI. In March 2021, he testified before the Senate, arguing that the United States must impose such controls “to ensure that democracies lead in advanced chips and that they are used for good.” (Paradoxically, in the same breath the address calls on the United States to both “identify opportunities to collaborate with competitors, including China, to build confidence and avoid races to the bottom” and to “tightly control exports of American technology to human rights abusers,” such as… China.)
Among Khan’s coauthors was aforementioned former congressional hopeful and longtermist Carrick Flynn, previously assistant director of the Center for the Governance of AI at FHI. Flynn himself individually authored a CSET issue brief, “Recommendations on Export Controls for Artificial Intelligence,” in February 2020. The brief, unsurprisingly, argues for tightened semiconductor export regulation much like Khan and Matheny.
This February, Wolf too provided a congressional address on “Advancing National Security and Foreign Policy Through Sanctions, Export Controls, and Other Economic Tools,” praising the October controls and urging further policy in the same vein. In it, he claims knowledge of the specific motivations of the controls’ writers:
BIS did not rely on ECRA’s emerging and foundational technology provisions when publishing this rule so that it would not need to seek public comments before publishing it.
These motivations also clearly included exactly the sorts of AI concerns Matheny, Khan, Flynn, and other longtermists had long raised in this connection. In its background summary, the text of one rule explicitly links the controls with hopes of retarding China’s AI development. Using language that could easily have been ripped from a CSET paper on the topic, the summary warns that “‘supercomputers’ are being used by the PRC to improve calculations in weapons design and testing including for WMD, such as nuclear weapons, hypersonics and other advanced missile systems, and to analyze battlefield effects,” as well as bolster citizen surveillance.
[...]
Longtermists, in short, have since at least 2019 exerted a strong influence over what would become the Biden White House’s October 2022 semiconductor export rules. If the policy is not itself the direct product of institutional longtermists, it at the very least bears the stamp of their enthusiastic approval and close monitoring.
Just as it would be a mistake to restrict interest in longtermism’s political ambitions exclusively to election campaigns, it would be shortsighted to treat its work on semiconductor infrastructure as a one-off incident. Khan and Matheny, among others, remain in positions of considerable influence, and have demonstrated a commitment to bringing longtermist concerns to bear on matters of high policy. The policy sophistication, political reach, and fresh-faced enthusiasm on display in its semiconductor export maneuvering should earn the AI doomsday lobby its fair share of critical attention in the years to come.
The article seems quite biased to me, but I do think some of the basics here make sense and match with things I have heard (but also, some of it seems wrong).
I have no idea what you are advocating for here. I have no inherent interest in trying to convince people that AGI is likely to be powerful, but it does seem likely to be true. Should I lie to people?
Many have chosen the path of keeping their beliefs to themselves. My guess is that this wasn’t very helpful, as the “imminent and powerful” part becomes kind of obvious once it starts happening.
What is the predictable result here? What is the counterfactual? How does anything better happen if you don’t say anything, and why are you falsely claiming that there has been a consensus that it’s a good idea to publicly talk about the power and capabilities of AI systems? A substantial fraction of the AI safety movement did not do this, and indeed strongly advocated against doing so (again, I think mistakenly), so even if you assign blame, you obviously can’t assign it uniformly.
Yep, the Lightspeed Grants table is part of the SFF table! I also think we should have published our own table, but it seemed lower priority after it was included in the SFF one.
We might also release a Lightspeed Grants retrospective soon.
Cool, I might just be remembering that one instance.
IIRC didn’t you somewhat frequently remove sections if the org objected because you didn’t have enough time to engage with them? (which I think was reasonably costly)
The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in all that I have read related to them was beating China in AI capabilities development.
Of course no one likes a symmetric arms race, but the question is whether people favored the “quickly establish overwhelming dominance over China by investing heavily in AI” strategy or the “try to negotiate with China and not set an example of racing towards AGI” strategy. My sense is that many people favored the former (though definitely not all, and I am not saying there is anything like consensus; my sense is it’s quite a divisive topic).
To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent “AI Security Forum” in Vegas, many x-risk-concerned people expressed very hawkish opinions.
Yep, I agree with this, but it nevertheless appears to be a relatively prevalent opinion among many EAs working in AI policy.
I think a non-trivial fraction of Aschenbrenner’s influence, as well as his intellectual growth, is due to us and the core EA/AI-Safety ideas, yeah. I doubt he would have written it if the extended community didn’t exist and if he hadn’t been mentored by Holden, etc.
I think most of those people believe that “having an AI aligned to ‘China’s values’” would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient if you think there is a greater than 5% chance of China ending up with “aligned AI” instead.
Yep, my impression is that this is an opinion that people mostly adopted after spending a bunch of time in DC and engaging with governance stuff, and so is not something represented in the broader EA population.
My best explanation is that, when working in governance, being pro-China is just very costly. In particular, combining the belief that AI will be very powerful with the belief that there is no urgency to beat China to it seems very anti-memetic in DC, and so people working in the space started adopting those stances.
But I am not sure. There are also non-terrible arguments for beating China being really important (though they are mostly premised on alignment being relatively easy, which seems very wrong to me).
In most cases this is rumor-based, but I have heard that a substantial chunk of the OP-adjacent EA-policy space has been quite hawkish for many years, and at least what I have heard is that a bunch of key leaders “basically agreed with the China part of situational awareness”.
Again, people should really take this with a double dose of salt. I am personally at like 50/50 on this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn’t seem crazy (though various things could have been lost in a game of telephone, and being very concerned about China doesn’t necessarily mean endorsing a “Manhattan project to AGI”, though the rumors I have heard did sound like they would endorse that).
Less rumor-based: I also know that Dario has historically been very hawkish, and “needing to beat China” was one of the top justifications historically given for why Anthropic does capability research. I have heard this from many people, so I feel more comfortable saying it with fewer disclaimers, but I am still only like 80% on it being true.
Overall, my current guess is that indeed, a large-ish fraction of the EA policy people would have pushed for things like this, and at the very least didn’t seem like they would push back on it that much. My guess is “we” are at least somewhat responsible for this, and there is much less of a consensus against a U.S.–China arms race among EAs in US governance than one might think, and so the above is not much evidence that there was no listening, or only very selective listening, to EAs.
(I think the issue with Leopold is somewhat precisely that he seems to be quite politically savvy in a way that seems likely to make him a deca-multi-millionaire and politically influential, possibly at the cost of all of humanity. I agree Eliezer is not the best presenter, but his error modes are clearly enormously different.)
Sure, my guess is OP gets around 50%[1] of the credit for that, and GV is about 20% of the funding in the pool, making the credit-adjusted remaining portion a ~$10M/yr grant ($20M/yr for 4 years of non-GV funding[2], at 50% credit). GV gives out ~$600M[3] in grants per year as recommended by OP, so to get above 5% you would need the equivalent of three projects of this size per year, which I haven’t seen (and don’t currently think exist).
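For concreteness, a minimal sketch of that back-of-the-envelope calculation (the dollar figures and the 50% credit share are just the rough guesses above, not exact numbers):

```python
# Rough sanity check of the credit math above.
# All figures are the approximate guesses from the comment, not exact numbers.

non_gv_funding_per_year = 20e6   # ~$20M/yr of non-GV funding in the pool (over 4 years)
op_credit_share = 0.50           # assumed ~50% of the credit for that funding goes to OP

credit_adjusted = non_gv_funding_per_year * op_credit_share   # ~$10M/yr attributable to OP

gv_giving_per_year = 600e6                          # ~$600M/yr of GV grants recommended by OP
five_percent_threshold = 0.05 * gv_giving_per_year  # 5% of GV giving = $30M/yr

projects_needed = five_percent_threshold / credit_adjusted
print(f"Projects of this size needed per year to exceed 5%: {projects_needed:.0f}")  # -> 3
```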
Even at 100% credit, which seems like a big stretch, my guess is you don’t get over 5%.
To substantially change the implications of my sentence, I think you would need to get closer to 10%, which seems implausible from my viewpoint. It seems pretty clear the right number is around 95% (and IMO, given that, it’s bad form to just respond with a “this was never true” when it has clearly and obviously been true in some past years, and is at the very least very close to true this year).
- ^ Mostly chosen for Schelling-ness. I can imagine it being higher or lower. It seems like lots of other people outside of OP have been involved, and the choice of area seems heavily determined by what OP could get buy-in for from other funders, making it somewhat more constrained than other grants, so I think a lower number seems more reasonable.
- ^ I have also learned to really not count your chickens before they are hatched with projects like this, so I think one should discount this funding by an expected 20-30% for a 4-year project like this, since funders frequently drop out and leadership changes, but we can ignore that for now.
Sorry, just a typo!
Re: “nothing has changed in the last year.” No, a lot has changed, but my quick-take post wasn’t about “what has changed,” it was about “correcting some misconceptions I’m encountering.”
Makes sense. I think it’s easy to point out ways things are off, but in this case, IMO the most important thing that needs to happen in the funding ecosystem is for people to grapple with the huge changes that have occurred, and I think a lot of OP communication has been actively pushing back on that (not necessarily intentionally; I just think it’s a tempting and recurring error mode for established institutions to react to people freaking out with a “calm down” attitude, even when that’s inappropriate, cf. the CDC and pandemics and many past instances of similar dynamics).
In particular, I am confident the majority of readers of your original comment interpreted what you said as meaning that GV has no substantial dispreference for right-of-center grants, which I think was substantially harmful to the epistemic landscape (though I am glad that further prodding by me and Jason cleared that up).
I’ll note that we’ve consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding.
I don’t currently believe this, and think you are mostly not exposed to most of the people who could be doing good work in the space (which is downstream of a bunch of other choices OP and GV made), and also overestimate the degree to which OP is helpful in getting the relevant projects funding (I know of 1-2 projects in this space which did ultimately get funding, where OP was a bit involved, but my sense is it was overall slightly anti-helpful).
Re: “De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing.” This isn’t true, including specifically for my team (“AI governance and policy”).
I would take bets on this! It is of course important to assess counterfactualness of recommendations from OP. If you recommend a grant a funder would have made anyways, it doesn’t make any sense to count that as something OP “influenced”.
With that adjustment, I would take bets that more than 90% of influence-adjusted grants from OP in 2024 will have been made by GV. (I don’t think this holds within “AI governance and policy”, where I can imagine it being substantially lower; I have much less visibility into that domain. My median for all of OP is 95%, but that doesn’t directly imply my betting odds, since I want at least a bit of profit margin.)
Happy to refer to some trusted third-party arbiter for adjudicating.
I also don’t think this was ever true: “One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV.” There’s plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.
I used the double negative here very intentionally. Funding recommendations don’t get made by majority vote, and there isn’t such a thing as “the Open Phil view” on a grant, but up until 2023 I had long and intense conversations with staff at OP who said that it would be very weird and extraordinary if OP rejected a grant that most of its staff considered substantially more cost-effective than your average grant.
That of course stopped being true recently (and I also think past OP staff somewhat overstated the degree to which it was true previously, but it sure was something that OP staff actively reached out to me about and claimed was true when I disputed it). Your saying “this was never true” is in direct contradiction to statements made to me by OP staff up until late 2023 (bar what people claimed were very rare exceptions).
I don’t believe something quite as strong as this, but I did think at the time that the work on export controls was bad and likely to exacerbate arms race dynamics, and I continue to believe this (and the celebration of the export controls as a great success of EA policy efforts was one of the things that caused me to update towards future EA-driven AI policy efforts probably being net harmful, though FTX played a bigger role).