Recently, I’ve encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy’s Global Catastrophic Risks (GCR) team does or doesn’t fund and why, especially re: our AI-related grantmaking. So, I’d like to briefly clarify a few things:
Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can’t be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don’t play to our comparative advantages.
Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I’ve seen or heard about why we didn’t fund something are wrong. (Similarly, us choosing to fund someone doesn’t mean we endorse everything about them or their work/plans.)
Very often, when we decline to do or fund something, it’s not because we don’t think it’s good or important, but because we aren’t the right team or organization to do or fund it, or we’re prioritizing other things that quarter.
As such, we spend a lot of time working to help create or assist other philanthropies and organizations who work on these issues and are better fits for some opportunities than we are. I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages — whether through existing large-scale philanthropies turning their attention to these risks or through new philanthropists entering the space.
While Good Ventures is Open Philanthropy’s largest philanthropic partner, we also regularly advise >20 other philanthropists who are interested in hearing about GCR-related funding opportunities. (Our GHW team also does similar work partnering with many other philanthropists.) On the GCR side, we have helped move tens of millions of dollars of non-GV money to GCR-related organizations in just the past year, including to some organizations that GV recently exited. GV and each of those other funders have their own preferences and restrictions that we have to work around when recommending funding opportunities.
Among the AI funders we advise, Good Ventures is one of the most open and flexible.
We’re happy to see funders enter the space even if they don’t share our priorities or work with us. When more funding is available, and funders pursue a broader mix of strategies, we think this leads to a healthier and more resilient field overall.
Many funding opportunities are a better fit for non-GV funders, e.g. due to funder preferences, restrictions, scale, or speed. We’ve also seen some cases where an organization can have more impact if they’re funded primarily or entirely by non-GV sources. For example, it’s more appropriate for some types of policy organizations outside the U.S. to be supported by local funders, and other organizations may prefer support from funders without GV/OP’s past or present connections to particular grantees, AI companies, etc. Many of the funders we advise are actively excited to make use of their comparative advantages relative to GV, and regularly do so.
We are excited for individuals and organizations that aren’t a fit for GV funding to apply to some of OP’s GCR-related RFPs (e.g. here, for AI governance). If we think the opportunity is strong but a better fit for another funder, we’ll recommend it to other funders.
To be clear, these other funders remain independent of OP and decline most of our recommendations, but in aggregate our recommendations often lead to target grantees being funded.
We believe reducing AI GCRs via public policy is not an inherently liberal or conservative goal. Almost all the work we fund in the U.S. is nonpartisan or bipartisan and engages with policymakers on both sides of the aisle. However, at present most of the individuals in the field of AI governance and policy (whether we fund them or not) are personally left-of-center and have more left-of-center policy networks. Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
OP’s AI teams spend almost no time directly advocating for specific policy ideas. Instead, we focus on funding a large ecosystem of individuals and organizations to develop policy ideas, debate them, iterate them, advocate for them, etc. These grantees disagree with each other very often (a few examples here), and often advocate for different (and sometimes ~opposite) policies.
We think it’s fine and normal for grantees to disagree with us, even in substantial ways. We’ve funded hundreds of people who disagree with us in a major way about fundamental premises of our GCR work, including about whether AI poses GCR-scale risks at all (example).
I think frontier AI companies are creating enormous risks to humanity, I think their safety and security precautions are inadequate, and I think specific reckless behaviors should be criticized. AI company whistleblowers should be celebrated and protected. Several of our grantees regularly criticize leading AI companies in their official communications, as do many senior employees at our grantees, and I think this happens too infrequently.
Relatedly, I think substantial regulatory guardrails on frontier AI companies are needed, and organizations we’ve directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose (alongside some policies they tend to support).
I’ll also take a moment to address a few misconceptions that are somewhat less common in EA or rationalist spaces, but seem to be common elsewhere:
Discussion of OP online and in policy media tends to focus on our AI grantmaking, but AI represents a minority of our work. OP has many focus areas besides AI, and has given far more to global health and development work than to AI work.
We are generally big fans of technological progress. See e.g. my post about the enormous positive impacts from the industrial revolution, or OP’s funding programs for scientific research, global health R&D, innovation policy, and related issues like immigration policy. Most technological progress seems to have been beneficial, sometimes hugely so, even though there are some costs and harms along the way. But some technologies (e.g. nuclear weapons, synthetic pathogens, and superhuman AI) are extremely dangerous and warrant extensive safety and security measures rather than a “move fast and break [the world, in this case]” approach.
We have a lot of uncertainty about how large AI risk is, about exactly which risks are most worrying (e.g. loss of control vs. concentration of power), about the timelines on which the worst-case risks might materialize, and about what can be done to mitigate them. As such, most of our funding in the space has been focused on (a) talent development, and (b) basic knowledge production (e.g. Epoch AI) and scientific investigation (example), rather than work that advocates for specific interventions.
I hope these clarifications are helpful, and lead to fruitful discussion, though I don’t expect to have much time to engage with comments here.
“Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.”
Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US “right-of-center”[1] policy work to GV, I would be somewhat surprised that this otherwise well-written post didn’t say so.
Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It’s generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and it seems that would be the case here. However, where a funder is as critical to an ecosystem as GV is here, I think fairly high transparency about its unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand if the 800-pound gorilla funder will be closed to them.

[1] I place this in quotes because the term is ambiguous.

(Fwiw, the community prediction on the Metaculus question ‘Will there be another donor on the scale of 2020 Good Ventures in the Effective Altruist space in 2026?’ currently sits at 43%.)
I think it would be good to taboo “OP is funding X”, at least when talking about present-day Open Phil.
Historically, OP would have used the phrase “OP is funding X” to mean that it had referred a grant to X to GV (and such referrals were approximately never rejected). One could also generally safely assume that if OP decided not to recommend a grant to GV, OP did not think that grant would be more cost-effective than other grants it referred to GV (and as such, the words people used to describe OP not referring a grant to GV were “rejecting X” or “defunding X”).
Of course, now that the relationship between OP and GV has substantially changed, and the trust has broken down somewhat, the term “OP is funding X” is confusing (including, IMO, in your comment, where in your last few bullet points you say “OP has given far more to global health than AI”; to avoid confusing people here, I think you mean “OP has recommended far more grants to global health”, since OP itself has not actually given away any money directly).
I think the key thing for people to understand is why it no longer makes sense to talk about “OP funding X”, and where it makes sense to model OP’s grant referrals to GV as still closely matching OP’s internal cost-effectiveness estimates.
For organizations and funders trying to orient towards the funding ecosystem, the most important thing is understanding what GV is likely to fund. So when people talk about “OP funding X” or “OP not funding X”, that is what they usually refer to (and that is also how OP has historically used those words, and how you have used those words in your comment). I expect this usage to change over time, but it will take a while (and I would ask you to be gracious and charitable when trying to understand what people mean when they conflate OP and GV in discussions).[1]
Now, having gotten that clarification out of the way, my sense is that most of the critiques you have seen about OP funding are much less inaccurate when interpreted through this lens. As Jason says in another comment, it does look like GV has a very limited appetite for grants to right-of-center organizations, and since (as you say yourself) the external funders you sometimes refer grants to reject the majority of those referrals, this de facto leads to a large reduction in funding, and a large negative incentive for founders and organizations who are considering working more with the political right.
I think the above is useful, and I think it helps people understand some of how OP is trying to counteract the effects of GV’s withdrawal from many crucial funding areas, but I do also think your comment has far too much of the vibe of “nothing has changed in the last year” and “ultimately you shouldn’t worry too much about which areas GV does or doesn’t want to fund”. De facto, GV was and is likely to continue to be 95%+ of the giving that OP influences, and the dynamics between OP and non-GV funders are drastically different from the dynamics that historically existed between OP and GV.
I think a better intuition pump for people trying to understand the funding ecosystem would be a comment that is scope-sensitive in the relevant ways. I think it would start with saying:
Yes, over the last 1-2 years our relationship to GV has changed, and I think it no longer really makes sense to think about OP ‘funding X’. These days, especially in the catastrophic risk space, it makes more sense to think of OP as a middleman between grantees and other foundations and large donors. This is a large shift, and I think understanding how that shift has changed funding allocation is of crucial importance for understanding which projects in this space are likely underfunded and, if you are considering starting new organizations or projects, which of them might be able to receive the funding they need to exist.
95%+ of the recommendations we make are to GV. When GV does not want to fund something, whether it gets funded usually comes down to the degree to which external funders can evaluate the grant mostly on their own, which depends heavily on their more idiosyncratic interests and preferences. My best guess is that most grants we do not refer to GV, but would like to see funded, do not ultimately get funded by other funders.
[Add the rest of your comment, ideally explaining how GV might differ from OP here[2]]
Of course, people might also care about the opinions of OP staff, as people who have been thinking about grantmaking for a long time, but my sense is that insofar as those opinions do not translate into funding, they are of lesser importance when trying to identify neglected niches and funding approaches (though still important).
For example, you say that OP is happy to work with people who are highly critical of OP. That does seem true! However, my honest best guess is that it’s much less true of GV, and that being publicly critical of GV and Dustin is the kind of thing that could very much influence whether OP ends up successfully referring a grant to GV; to some degree being critical of OP also makes receiving funding from GV less likely, though much less so. That is of crucial importance for people to know when trying to decide how open and transparent to be about their opinions.