GCR capacity-building grantmaking and projects at Open Phil.
Eli Rose
However, some of the public stances he has taken make it difficult for grantmakers to associate themselves with him. Even if OP were otherwise very excited to fund AISC, it would be political suicide for them to do so. They can’t even get away with funding university clubs.
(I lead the GCR Capacity Building team at Open Phil and have evaluated AI Safety Camp for funding in the past.)
AISC leadership’s involvement in Stop AI protests was not a factor in our no-fund decision (which was made before the post you link to).
For AI safety talent programs, I think it’s quite unlikely we’d consider something like “leadership involvement in protests” on its own as a significant factor in a funding decision. So I don’t think the “it would be political suicide” reasoning you give here is reflective of our decision process.
I edited this post on January 21, 2025, to reflect that we are continuing to fund stipends for graduate student organizers for non-EA groups, while stopping stipends for undergraduate student organizers. I think that paying grad students for their time is less unconventional than paying undergraduates, and also that their opportunity cost is higher on average. Ignoring this distinction was an oversight in the original post.
Upcoming changes to Open Philanthropy’s university group funding
Hey! I lead the GCRCB team at Open Philanthropy, which, as part of its portfolio, funds “meta EA” stuff (e.g. CEA).
I like the high-level idea here (haven’t thought through the details).
We’re happy to receive proposals like this for media communicating EA ideas and practices. Feel free to apply here, or, if you have a more early-stage idea, DM me on here with a short description — no need for polish — and I’ll get back to you with a quick take on whether it’s something we might be interested in. : )
Dispelling the Anthropic Shadow
What is the base rate for Chinese citizens saying on polls that the Chinese government should regulate X, for any X?
I thought this was interesting & forceful, and am very happy to see it in public writing.
Funding for programs and events on global catastrophic risk, effective altruism, and other topics
Funding for work that builds capacity to address risks from transformative AI
The full letter is available here; it was recently posted online as part of this tweet thread.
Zach Robinson will be CEA’s next CEO
(meta musing) The conjunction of the negations of a bunch of statements seems a bit doomed to get a lot of disagreement karma, sadly. Esp. if the statements being negated are “common beliefs” of people like the ones on this forum.
I agreed with some of these and disagreed with others, so I felt unable to agreevote. But I strongly appreciated the post overall, so I strong-upvoted.
Similar to those for our other roles, plus experience running a university group as an obvious addition. I also think that extroversion and proactive communication are somewhat more important for these roles than for others.
Going to punt on this one as I’m not quite sure what is meant by “systems.”
This is too big to summarize here, unfortunately.
Check out “what kinds of qualities are you looking for in a hire” here. My sense is we index less on previous experience than many other organizations do (though it’s still important). Experience juggling many tasks, prioritizing, and syncing up with stakeholders jumps to mind. I have a hypothesis that consultant experience would be helpful for this role, but that’s a bit conjectural.
This is a bit TBD — happy to chat more further down the pipeline with any interested candidates.
We look for this in work tests and in previous experience.
The CB team continuously evaluates the track record of grants we’ve made when they’re up for renewal, and this feeds into our sense of how good programs are overall. We also spend a lot of time keeping up with what’s happening in CB and in x-risk generally, and this feeds into our picture of how well CB projects are working.
Check out “what kinds of qualities are you looking for in a hire” here.
Same answer as 2.
Empirically, in hiring rounds I’ve previously been involved in for my team at Open Phil, it has often seemed to be the case that if the top 1-3 candidates just vanished, we wouldn’t make a hire. I’ve also observed hiring rounds that concluded with zero hires. So, basically I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).
I’m sympathetic to the take “that seems pretty weird.” It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best guess would be that our bar has been somewhat too high in the past, though this is speaking just for myself. I think when you have a lot of strategic uncertainty, as GCR teams often do, that pushes towards a higher hiring bar, since you need people who have a wide variety of skills.
I’d probably also gently push back against the notion that our hiring pool is extremely deep, though that’s obviously relative. I think e.g. our TAIS roles will likely get many fewer applicants than comparable safety research roles at labs, for a mix of reasons including salience to relevant people and the fact that OP isn’t competitive with labs on salary.
(As of right now, TAIS has only gotten 53 applicants across all its roles since the ad went up, vs. governance, which has gotten ~2x as many — though a lot of people tend to apply right around the deadline.)
AMA: Six Open Philanthropy staffers discuss OP’s new GCR hiring round
Thanks for the reply.
I think “don’t work on climate change[1] if it would trade off against helping one currently identifiable person with a strong need” is a really bizarre/undesirable conclusion for a moral theory to come to, since if widely adopted it seems like this would lead to no one being left to work on climate change. The prospective climate change scientists would instead earn-to-give for AMF.
[1] Or bettering relations between countries to prevent war, or preventing the rise of a totalitarian regime, etc.
Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point — i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.
It seems like a society where everyone took contractualism to heart might have a hard time coordinating on any large moral issues where the difference any one individual makes is small, including non-x-risk ones like climate change or preventing great power war. What does the contractualist position recommend on these issues?
(In the case of climate change, it’s plausibly true that “every little bit helps,” while in preventing war between great powers, outcomes seem much more discontinuous — not sure if this matters.)
Enjoyed this, strong-upvoted!