Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. I often help the EA Forum with various site issues; if something is broken on the site, there’s a good chance it’s my fault (sorry!).
Habryka
IIRC didn’t you somewhat frequently remove sections if the org objected because you didn’t have enough time to engage with them? (which I think was reasonably costly)
The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in all that I have read related to them was beating China in AI capabilities development.
Of course no one likes a symmetric arms race, but the question is whether people favored the “quickly establish overwhelming dominance over China by investing heavily in AI” strategy or the “try to negotiate with China and not set an example of racing towards AGI” strategy. My sense is many people favored the former (though definitely not all; I am not saying there is anything like consensus, and my sense is it’s quite a divisive topic).
To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent “AI Security Forum” in Vegas, many x-risk concerned people expressed very hawkish opinions.
Yep, I agree with this, but it nevertheless appears to be a relatively prevalent opinion among many EAs working in AI policy.
I think a non-trivial fraction of Aschenbrenner’s influence as well as intellectual growth is due to us and the core EA/AI-Safety ideas, yeah. I doubt he would have written it if the extended community didn’t exist, and if he wasn’t mentored by Holden, etc.
I think most of those people believe that “having an AI aligned to ‘China’s values’” would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient if you think there is a greater than 5% chance of China ending up with “aligned AI” instead.
Yep, my impression is that this is an opinion that people mostly adopted after spending a bunch of time in DC and engaging with governance stuff, and so is not something represented in the broader EA population.
My best explanation is that when working in governance, being seen as pro-China is just very costly. In particular, the combination of believing that AI will be very powerful but that there is no urgency to beat China to it seems very anti-memetic in DC, and so people working in the space started adopting those stances.
But I am not sure. There are also non-terrible arguments for beating China being really important (though they are mostly premised on alignment being relatively easy, which seems very wrong to me).
In most cases this is rumor-based, but I have heard that a substantial chunk of the OP-adjacent EA-policy space has been quite hawkish for many years, and at least what I have heard is that a bunch of key leaders “basically agreed with the China part of situational awareness”.
Again, people should really take this with a double dose of salt. I am personally at like 50/50 on this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn’t seem crazy (though various things could have been lost in a game of telephone, and being very concerned about China doesn’t necessarily mean endorsing a “Manhattan project to AGI”; that said, the rumors I heard did sound like they would endorse that).
Less rumor-based, I also know that Dario has historically been very hawkish, and “needing to beat China” was one of the top justifications historically given for why Anthropic does capability research. I have heard this from many people, so feel more comfortable saying it with fewer disclaimers, but am still only like 80% on it being true.
Overall, my current guess is that indeed a large-ish fraction of the EA policy people would have pushed for things like this, and at least didn’t seem like they would push back on it that much. My guess is “we” are at least somewhat responsible for this, and there is much less of a consensus against a U.S.-China arms race among EAs in U.S. governance than one might think; so the above is not much evidence that there was no listening, or only very selective listening, to EAs.
(I think the issue with Leopold is somewhat precisely that he seems to be quite politically savvy in a way that seems likely to make him a deca-multi-millionaire and politically influential, possibly at the cost of all of humanity. I agree Eliezer is not the best presenter, but his error modes are clearly enormously different)
Sure, my guess is OP gets around 50%[1] of the credit for that, and GV provides about 20% of the funding in the pool, making the remaining portion a ~$10M/yr grant ($20M/yr of non-GV funding[2] over 4 years). GV gives out ~$600M[3] in grants per year recommended by OP, so to get to >5% you would need the equivalent of 3 projects of this size per year, which I haven’t seen (and don’t currently think exist).
Even at 100% credit, which seems like a big stretch, my guess is you don’t get over 5%.
To substantially change the implications of my sentence I think you would need to get closer to 10%, which seems implausible from my viewpoint. It seems pretty clear the right number is around 95% (and given that, IMO it’s bad form to just respond with a “this was never true” when it has clearly and obviously been true in some past years, and is at the very least very close to true this year).
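To make the arithmetic behind this explicit (a rough sketch using my own guessed figures from above, which are themselves only estimates):

$$0.5 \times \$20\text{M/yr} = \$10\text{M/yr of credit-adjusted non-GV funding}, \qquad \frac{\$10\text{M/yr}}{\$600\text{M/yr}} \approx 1.7\%$$

so it would take roughly three projects of this size per year before non-GV money crosses the 5% threshold.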
- ^
Mostly chosen for Schelling-ness. I can imagine it being higher or lower. It seems like lots of other people outside of OP have been involved, and the choice of area seems heavily determined by what OP could get buy-in for from other funders (making it somewhat more constrained than other grants), so I think a lower number seems more reasonable.
- ^
I have also learned to really not count your chickens before they are hatched with projects like this, so I think one should discount this funding by an expected 20-30% for a 4-year project like this, since funders frequently drop out and leadership changes, but we can ignore that for now
- ^
Sorry, just a typo!
Re: “nothing has changed in the last year.” No, a lot has changed, but my quick-take post wasn’t about “what has changed,” it was about “correcting some misconceptions I’m encountering.”
Makes sense. I think it’s easy to point out ways things are off, but in this case, IMO the most important thing that needs to happen in the funding ecosystem is people grappling with the huge changes that have occurred. I think a lot of OP communication has been actively pushing back on that (not necessarily intentionally; it’s a tempting and recurring error mode for established institutions to react to people freaking out with a “calm down” attitude, even when that’s inappropriate, cf. the CDC and pandemics and many past instances of similar dynamics).
In particular, I am confident the majority of readers of your original comment interpreted what you said as meaning that GV has no substantial dispreference for right-of-center grants, which I think was substantially harmful to the epistemic landscape (though I am glad that further prodding by me and Jason cleared that up).
I’ll note that we’ve consistently been able to help such work find funding, because (as noted here), the bottleneck is available right-of-center opportunities rather than available funding.
I don’t currently believe this, and think you are mostly not exposed to most people who could be doing good work in the space (which is downstream of a bunch of other choices OP and GV made). I also think you overestimate the degree to which OP is helpful in getting the relevant projects funding (I know of 1-2 projects in this space which did ultimately get funding, where OP was a bit involved, but my sense is it was overall slightly anti-helpful).
Re: “De-facto GV was and is likely to continue to be 95%+ of the giving that OP is influencing.” This isn’t true, including specifically for my team (“AI governance and policy”).
I would take bets on this! It is of course important to assess counterfactualness of recommendations from OP. If you recommend a grant a funder would have made anyways, it doesn’t make any sense to count that as something OP “influenced”.
With that adjustment, I would take bets that more than 90% of influence-adjusted grants from OP in 2024 will have been made by GV. (I don’t think it’s true in “AI governance and policy”, where I can imagine it being substantially lower; I have much less visibility into that domain. My median for all of OP is 95%, but that doesn’t imply my betting odds, since I want at least a bit of profit margin.)
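To be explicit about what I mean by “influence-adjusted” (my own formalization here, not an established OP metric): weight each grant by the probability that it would not have been made without OP’s recommendation, i.e.

$$\text{influence-adjusted total} = \sum_i \text{amount}_i \times \Pr(\text{grant } i \text{ is counterfactual on OP})$$

and then ask what share of that total flows through GV.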
Happy to refer to some trusted third-party arbiter for adjudicating.
I also don’t think this was ever true: “One was also able to roughly assume that if OP decides to not recommend a grant to GV, that most OP staff do not think that grant would be more cost-effective than other grants referred to GV.” There’s plenty of internal disagreement even among the AI-focused staff about which grants are above our bar for recommending, and funding recommendation decisions have never been made by majority vote.
I used the double negative here very intentionally. Funding recommendations don’t get made by majority vote, and there isn’t such a thing as “the Open Phil view” on a grant, but up until 2023 I had long and intense conversations with staff at OP who said that it would be very weird and extraordinary if OP rejected a grant that most of its staff considered substantially more cost-effective than your average grant.
That of course stopped being true recently (and I also think past OP staff overstated a bit the degree to which it was true previously, but it sure was something that OP staff actively reached out to me about and claimed was true when I disputed it). You saying “this was never true” is in direct contradiction to statements made by OP staff to me up until late 2023 (bar what people claimed were very rare exceptions).
Due to what I understand to be trickiness in communicating Dustin’s relationship to PR risk to the EA community, there isn’t a ton of clarity on which things OP would fund via GV. But here are some guesses at grants where I expect OP to be hesitant for PR-ish reasons, but which the LTFF would definitely consider:
- A grant to Manifold Markets (who I expect Dustin would not be in favor of funding due to hosting certain right-leaning intellectuals at their conferences)
- A grant to Nick Bostrom to work on FHI-ish stuff
- A grant to a right-leaning AI think tank
- A grant to rationalist community building, insofar as it would be effective for improving the long-term future
- A grant to work on digital sentience
- Grants to various high-school programs like Atlas
- AI pause or stop advocacy
- Distributing copies of HPMoR to various promising people around the world
Again, there isn’t much clarity on what things OP might or might not fund via GV, but my current best guess is none of the things on the list above could currently get GV funding.
He generally doesn’t vote on any SFF grants (I don’t know why, but would be surprised if it’s because of trying to minimize conflicts of interest).
this conflict of interest is why Tallinn appoints others to make the actual grant decisions
(I don’t think this is particularly true. I think the reasons why Jaan chooses to appoint others to make grant decisions are mostly unrelated to this.)
What you are saying here is trivially verifiable with public information. This isn’t something for which we need an investigative journalist; it’s something for which you just need basic Google skills:
Sure! I continued the same chat and gave it the query:
What fraction of these grants would meaningfully be described as “left of center” policy work? Apply the same standard that you applied with the right of center policy work
Its response:
Certainly! To determine what fraction of these grants could meaningfully be described as “left-of-center” policy work in the United States, I’ll follow the same standard applied earlier for identifying “right-of-center” policy work:
1. Focus on U.S.-based organizations engaged in policy work that aligns with left-of-center positions.
2. Exclude international organizations or those not primarily engaged in policy advocacy.
3. Consider the political affiliations and policy areas traditionally associated with left-of-center ideologies.
Identifying Left-of-Center Policy Work Grants
After reviewing the list, the following grants meet these criteria:
YIMBY Law — General Support (2024)
Amount: $180,000
Date: June 2024
Focus Area: Land Use Reform
Description: YIMBY Law is a U.S.-based organization advocating for housing policy reforms to increase housing supply and affordability. Their work often aligns with progressive or left-of-center policy positions focused on reducing housing inequality.
YIMBY Action — Housing Advocacy in California (2024)
Amount: $120,000
Date: June 2024
Focus Area: Land Use Reform
Description: Similar to YIMBY Law, YIMBY Action promotes policies to address the housing crisis through advocacy and community organizing, aligning with left-of-center ideals on affordable housing.
Clinton Health Access Initiative — Effectiveness Improvements for Health Multilaterals
Amount: $250,000
Date: June 2024
Focus Area: Global Aid Policy
Description: The Clinton Health Access Initiative (CHAI) is affiliated with the Clinton Foundation, associated with Democratic Party figures. CHAI works on global health policies, a domain often linked with left-of-center advocacy for international aid and public health initiatives.
Clinton Health Access Initiative — Aid Efficacy Support
Amount: $743,775
Date: March 2024
Focus Area: Global Aid Policy
Description: Another grant to CHAI supporting policy work to improve the effectiveness of global health aid.
Plant Based Foods Association — Advocacy and Corporate Engagement
Amount: $2,500,000
Date: January 2024
Focus Area: Alternatives to Animal Products
Description: The PBFA advocates for policies supporting plant-based foods, including lobbying against restrictive labeling laws and promoting sustainable agriculture—positions often associated with left-of-center environmental and public health policies.
Plant Based Foods Institute — International Regranting Against Labeling Laws (2024)
Amount: $257,000
Date: January 2024
Focus Area: Alternatives to Animal Products
Description: This grant supports advocacy against laws that limit labeling of plant-based products, aligning with left-of-center views on consumer rights and environmental sustainability.
Harvard Animal Law & Policy Clinic — Global Farmed Animal Benchmark
Amount: $350,000
Date: June 2024
Focus Area: Farm Animal Welfare
Description: Engaged in policy work to improve farm animal welfare, a cause typically championed by left-of-center groups advocating for animal rights and ethical treatment.
Albright Stonebridge Group — Korean Global Aid Advocacy
Amount: $1,000,000
Date: January 2024
Focus Area: Global Aid Policy
Description: Co-founded by former Secretary of State Madeleine Albright, a Democrat, the firm engages in policy advocacy consistent with left-of-center perspectives on international cooperation and aid.
Peterson Institute for International Economics — Indian Economic Policy Reform (2024)
Amount: $680,000
Date: April 2024
Focus Area: Global Health & Development
Description: A U.S.-based think tank that leans centrist to center-left, focusing on economic policy research and advocacy.
Government Relations Group — Global Health Aid Study Tour in Ghana
Amount: $250,000
Date: May 2024
Focus Area: Global Aid Policy
Description: Engages in policy advocacy to promote global health aid, aligning with left-of-center values on international development and public health.
Government Relations Group — Japanese Support for NTD Elimination
Amount: $40,200
Date: June 2024
Focus Area: Global Aid Policy
Description: Advocacy work to secure support for neglected tropical diseases elimination, consistent with left-of-center emphasis on global health equity.
Calculating the Fraction
Total Number of Grants Listed: 151
Number of Grants Identified as Left-of-Center Policy Work in the U.S.: 11
Fraction:
$$\text{Fraction} = \frac{\text{Number of Left-of-Center Policy Work Grants}}{\text{Total Number of Grants}} = \frac{11}{151} \approx 0.0728 \text{ or } 7.28\%$$
Weighted by Grant Amount
To provide a weighted perspective based on grant sizes:
Step 1: Sum Total Amount of All Grants
From the previous calculation, the total funding amount is approximately $161,888,000.
Step 2: Sum Total Amount of Left-of-Center Policy Work Grants
Adding up the amounts for the identified grants:
YIMBY Law — $180,000
YIMBY Action — $120,000
Clinton Health Access Initiative — $250,000
Clinton Health Access Initiative — $743,775
Plant Based Foods Association — $2,500,000
Plant Based Foods Institute — $257,000
Harvard Animal Law & Policy Clinic — $350,000
Albright Stonebridge Group — $1,000,000
Peterson Institute for International Economics — $680,000
Government Relations Group — $250,000
Government Relations Group — $40,200
Total Amount for Left-of-Center Policy Work Grants:
$$\$180{,}000 + \$120{,}000 + \$250{,}000 + \$743{,}775 + \$2{,}500{,}000 + \$257{,}000 + \$350{,}000 + \$1{,}000{,}000 + \$680{,}000 + \$250{,}000 + \$40{,}200 = \$6{,}370{,}975$$
Step 3: Calculate Weighted Ratio
$$\text{Weighted Ratio} = \frac{\$6{,}370{,}975}{\$161{,}888{,}000} \times 100\% \approx 3.94\%$$
Final Answer
Approximately 7% of the grants are directed toward left-of-center policy work in the U.S. when considering the number of grants.
Approximately 4% of the total funding amount is allocated to left-of-center policy work in the U.S. when weighted by grant size.
Summary
Number of Grants: 11 out of 151 grants (~7%)
Total Funding for Left-of-Center Policy Work: $6,370,975
Weighted Fraction of Total Funding: ~4%
Conclusion
Applying the same standard as before, a small fraction of Open Philanthropy’s grants could be meaningfully described as supporting left-of-center policy work in the United States. This represents about 7% of the grants by number and 4% when weighted by grant amount.
Note: These calculations are based solely on the information provided in the grant list. The categorization of organizations as “left-of-center” is based on typical associations with political ideologies in the U.S. context. Some organizations may engage in non-partisan work or have a broader focus that spans the political spectrum.
Honestly, this response doesn’t seem great, and I might ask it to look more closely into the AI-related grants (some of which I think are left-leaning in a way o1 could figure out), but I have to head out. Someone else could also try to reproduce it (you can download the spreadsheets of all OP grants from the OP website); see the sketch below.
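If someone does attempt a reproduction, here is a minimal sketch of the counting-and-weighting step, assuming you’ve exported the grants spreadsheet to CSV and hand-labeled each grant’s political lean yourself (the file name and column names here are hypothetical, not OP’s actual schema):

```python
import csv

# Hypothetical file/column names -- check them against the actual headers
# of the grants export from the OP website before running.
AMOUNT_COL = "Amount"
LEAN_COL = "Lean"  # hand-labeled: "left", "right", or "" for neither

total_amount = 0.0
left_amount = 0.0
left_count = 0
grant_count = 0

with open("op_grants_2024.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Amounts may be formatted like "$2,500,000"
        amount = float(row[AMOUNT_COL].replace("$", "").replace(",", ""))
        total_amount += amount
        grant_count += 1
        if row[LEAN_COL].strip().lower() == "left":
            left_amount += amount
            left_count += 1

print(f"By count:  {left_count}/{grant_count} = {left_count / grant_count:.1%}")
print(f"By amount: ${left_amount:,.0f} / ${total_amount:,.0f} = {left_amount / total_amount:.1%}")
```

The labeling is the subjective part (as the o1 output above shows); the arithmetic itself is trivial once the labels exist.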
Cool, I might just be remembering that one instance.