@Habryka has stated that Lightcone has been cut off from OpenPhil/GV funding; my understanding is that OP/GV/Dustin do not like the rationalism brand because it attracts right-coded folks. Many kinds of AI safety work also seem cut off from this funding; reposting a comment from Oli:
As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don’t think this is because of any COIs, it’s because Dustin is very active in the Democratic Party and doesn’t want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any AI Open Phil funded policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will steer away from that.
Open Phil is also very limited in what they can say about what they can or cannot fund, because that itself is something that they are worried will make people annoyed with Dustin, which creates a terrible fog around how OP is thinking about stuff.[1]
Honestly, I think there might no longer be a single organization that I have historically been excited about that OpenPhil wants to fund. MIRI could not get OP funding, FHI could not get OP funding, Lightcone cannot get OP funding, my best guess is Redwood could not get OP funding if they tried today (though I am quite uncertain of this), most policy work I am excited about cannot get OP funding, the LTFF cannot get OP funding, any kind of intelligence enhancement work cannot get OP funding, CFAR cannot get OP funding, SPARC cannot get OP funding, FABRIC (ESPR etc.) and Epistea (FixedPoint and other Prague-based projects) cannot get OP funding, not even ARC is being funded by OP these days (in that case because of COIs between Paul and Ajeya).[2] I would be very surprised if Wentworth’s work, or Wei Dai’s work, or Daniel Kokotajlo’s work, or Brian Tomasik’s work could get funding from them these days. I might be missing some good ones, but the funding landscape is really quite thoroughly fucked in that respect. My best guess is Scott Alexander could not get funding, but I am not totally sure.[3]
I cannot think of anyone who I would credit with the creation or shaping of the field of AI Safety or Rationality who could still get OP funding. Bostrom, Eliezer, Hanson, Gwern, Tomasik, Kokotajlo, Sandberg, Armstrong, Jessicata, Garrabrant, Demski, Critch, Carlsmith, would all be unable to get funding[4] as far as I can tell. In as much as OP is the most powerful actor in the space, the original geeks are being thoroughly ousted.[5]
In general my sense is if you want to be an OP longtermist grantee these days, you have to be the kind of person that OP thinks is not and will not be a PR risk, and who OP thinks has “good judgement” on public comms, and who isn’t the kind of person who might say weird or controversial stuff, and is not at risk of becoming politically opposed to OP. This includes not annoying any potential allies that OP might have, or associating with anything that Dustin doesn’t like, or that might strain Dustin’s relationships with others in any non-trivial way.
Of course OP will never ask you to fit these constraints directly, since that itself could explode reputationally (and also because OP staff themselves seem miscalibrated on this and do not seem in sync with their leadership). Instead you will just get less and less funding, or just be defunded fully, if you aren’t the kind of person who gets the hint that this is how the game is played now.
And to provide some pushback on things you say, I think now that OP’s bridges with OpenAI are thoroughly burned after the Sam firing drama, OP is pretty OK with people criticizing OpenAI (since what social capital is there left to protect here?). My sense is criticizing Anthropic is slightly risky, especially if you do it in a way that doesn’t signal what OP considers good judgement on maintaining and spending your social capital appropriately (i.e. telling them that they are harmful for the world, or should really stop, is bad, but doing a mixture of praise and criticism without taking any controversial top-level stance is fine), but mostly also isn’t the kind of thing that OP will totally freak out about. I think OP used to be really crazy about this, but now is a bit more reasonable, and it’s not the domain where OP’s relationship to reputation-management is causing the worst failures.
I think all of this is worse in the longtermist space, though I am not confident. At present it wouldn’t surprise me very much if OP would defund a global health grantee because their CEO endorsed Trump for president, so I do think there is also a lot of distortion and skew there, but my sense is that it’s less, mostly because the field is much more professionalized and less political (though I don’t know how they think, for example, about funding on corporate campaign stuff, which feels like it would be more political and invite more of these kinds of skewed considerations).
Also, to balance things, sometimes OP does things that seem genuinely good to me. The lead reduction fund stuff seems good, genuinely neglected, and I don’t see that many of these dynamics at play there (I do also genuinely care about it vastly less than OP’s effect on AI Safety and Rationality things).
Also, Manifold, Manifund, and Manifest have never received OP funding—I think in the beginning we were too illegible for OP, and by the time we were more established and OP had hired a full-time forecasting grantmaker, I would speculate that we were seen as too much of a reputational risk given e.g. our speaker choices at Manifest.