(Writing from OP's point of view here.)
We appreciate that Nuño reached out about an earlier draft of this piece and incorporated some of our feedback. Though we disagree with a number of his points, we welcome constructive criticism of our work and hope to see more of it.
We've left a few comments below.
*****
The importance of managed exits
We deliberately chose to spin off our CJR grantmaking in a careful, managed way. As a funder, we want to commit to the areas we enter and avoid sudden exits. This approach:
Helps grantees feel comfortable starting and scaling projects. We've seen grantees turn down increased funding because they were reluctant to invest in major initiatives; they were concerned that we might suddenly change our priorities and force them to downsize (firing staff, ending projects half-finished, etc.).
Helps us hire excellent program officers. The people we ask to lead our grantmaking often have many other good options. We don't want a promising candidate to worry that they'll suddenly lose their job if we stop supporting the program they work on.
Exiting a program requires balancing:
the cost of additional below-the-bar spending during a slow exit;
the risks from a faster exit (difficulty accessing grant opportunities or hiring the best program officers, as well as damage to the field itself).
We launched the CJR program early in our history. At the time, we knew that committing to causes was important, but we had no experience in setting expectations about a program's longevity or what an exit might look like. When we decided to spin off CJR, we wanted to do so in a way that inspired trust from future grantees and program staff. In the end, we struck what felt to us like an appropriate balance between "slow" and "fast".[1]
It's plausible that we could have achieved this trust by investing less money and more time/energy. But at the time, we were struggling to scale our organizational capacity to match our available funding; we decided that other capacity-strained projects were a priority.
*****
Open Phil is not a unitary agent
Running an organization involves making compromises between people with different points of view, especially in the case of Open Phil, which explicitly hires people with different worldviews to work on different causes. This is especially true for cases where an earlier decision has created potential implicit commitments that affect a later decision.
I would avoid trying to model Open Phil (or other organizations) as unitary agents whose actions will match a single utility function. The way we handle one situation may not carry over to other situations.
If this dynamic leads you to put less "trust" in our decisions, we think that's a good thing! We try to make good decisions and often explain our thinking, but we don't think others should be assuming that all of our decisions are "correct" (or would match the decisions you would make if you had access to all of the relevant info).
*****
"By working in this area, one could gain leverage, for instance [...] leverage over the grantmaking in the area, by seeding Just Impact."
Indeed, part of our reason for seeding Just Impact was that it could go on to raise a lot more money, resulting in a lot of counterfactual impact. That kind of leverage can take funding from below the bar to above it.
*****
Open Philanthropy might gain experience in grantmaking, learn information, and acquire expertise that would be valuable for other types of giving. In the case of criminal justice reform, I would guess that the specific cause officers, rather than Open Philanthropy as an institution, would gain most of the information. I would also guess that the lessons learnt haven't generalized to, for instance, pandemic prevention funding advocacy.
This doesn't accord with our experience. Over six years of working closely with Chloe, we learned a lot about effective funding in policy and advocacy in ways we do expect to accrue to other focus areas. She was also a major factor when we updated our grantmaking process to emphasize the importance of an organization's leadership for the success of a grant.
It's possible that we would have learned these lessons otherwise, but given that Chloe was our first program officer, a disproportionate amount of organizational learning came from our early time working with her, and those experiences have informed our practices.
Note that when we launched our programs in South Asian Air Quality and Global Aid Policy, we explicitly stated that we "expect to work in [these areas] for at least five years". This decision comes from the experience we've developed around setting expectations.
So one of the things I'm still confused about is the two spikes in funding, one in 2019 and the other in 2021, both of which can be interpreted as parting grants:
So OP gave half of the funding to criminal justice reform ($100M out of $200M) after writing GiveWell's Top Charities Are (Increasingly) Hard to Beat, and this makes me less likely to think about this in terms of an exit grant and more in terms of, idk, some sort of nefariousness/shenanigans.
The 2019 "spike" you highlight doesn't represent higher overall spending; it's a quirk of how we record grants on the website.
Each program officer has an annual grantmaking "budget", which rolls over into the next year if it goes unspent. The CJR budget was a consistent ~$25 million/year from 2017 through 2021. If you subtract the Just Impact spin-out at the end of 2021, you'll see that the total grantmaking over that period matches the total budget.
So why does published grantmaking look higher in 2019?
The reason is that our published grants generally "frontload" payment amounts: if we're making three payments of $3 million in each of 2019, 2020, and 2021, that will appear as a $9 million grant published in 2019.
In the second half of 2019, the CJR team made a number of large, multi-year grants, but payments in future years still came out of their budget for those years, which is why the published totals look lower in 2020 and 2021 (minus Just Impact). Spending against the CJR budget in 2019 was $24 million, slightly under budget.
So the actual picture here is "CJR's budget was consistent from 2017-2021 until the spin-out", not "CJR's budget spiked in the second half of 2019".
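The frontloading mechanics described above can be sketched with toy numbers. All figures below are made up for illustration (they only mirror the ~$25M/year budget mentioned in the reply); the point is just how announcement-year totals can spike while per-year budget draw-down stays flat:

```python
# Toy illustration: "published" totals record a grant's full amount in the
# year it is announced, while "paid" totals spread payments across years.
# All numbers are hypothetical.

grants = [
    # (year_announced, [(payment_year, amount_in_$M), ...])
    (2017, [(2017, 25)]),                           # single-year grant
    (2018, [(2018, 25)]),
    (2019, [(2019, 25), (2020, 17), (2021, 16)]),   # multi-year grant announced in 2019
    (2020, [(2020, 8)]),
    (2021, [(2021, 9)]),
]

published = {}  # totals as they appear on the website (announcement year)
paid = {}       # totals as they actually draw down each year's budget

for announced, payments in grants:
    published[announced] = published.get(announced, 0) + sum(a for _, a in payments)
    for year, amount in payments:
        paid[year] = paid.get(year, 0) + amount

print(published)  # {2017: 25, 2018: 25, 2019: 58, 2020: 8, 2021: 9}
print(paid)       # {2017: 25, 2018: 25, 2019: 25, 2020: 25, 2021: 25}
```

Both views sum to the same total; the 2019 "spike" exists only in the published (announcement-year) view.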
So this doesn't really dissolve my curiosity.
In dialog form, because otherwise this would have been a really long paragraph:
NS: I think that the spike in funding in 2019, right after the GiveWell's Top Charities Are (Increasingly) Hard to Beat blog post, is suspicious.
AG: Ah, but it's not higher spending. Because of our accounting practices, it's rather an increase in future funding commitments. So your chart isn't about "spending"; it's about "locked-in spending commitments". And in fact, in the next few years, spending-as-recorded goes down because the locked-in funding is spent.
NS: But why the increase in locked-in funding commitments in 2019? It still seems suspicious, even if marginally less so.
AG: Because we frontload our grants; many of the grants in 2019 were for grantees to use for 2-3 years.
NS: I don't buy that. I know that many of the grants in 2019 were multi-year (frontloaded), but previous grants in the space were not as frontloaded, or not as frontloaded in that volume. So I think there is still something I'm curious about, even if the mechanistic aspect is clearer to me now.
AG: ¯\_(ツ)_/¯ (I don't know what you would say here.)
If this dynamic leads you to put less "trust" in our decisions, I think that's a good thing!
I will push back a bit on this as well. I think it's very healthy for the community to be skeptical of Open Philanthropy's reasoning ability, and to be vigilant about trying to point out errors.
On the other hand, I don't think it's great if we have a dynamic where the community is skeptical of Open Philanthropy's intentions. Basically, there's a big difference between "OP made a mistake because they over/underrated X" and "OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants."
Basically, there's a big difference between "OP made a mistake because they over/underrated X" and "OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants."
The synthesis position might be something like "some subset of OP made a mistake because they were subconsciously politically or PR motivated and unintentionally made sub-optimal grants."
I think this is a reasonable candidate hypothesis, and should not be that much of a surprise, all things considered. We're all human.
FWIW I would be surprised to see you, Linch, make a suboptimal grant out of PR motivation. I think Open Phil is capable of being in a place where it can avoid making noticeably-suboptimal grants due to bad subconscious motivations.
I agree that there's a difference in the social dynamics of being vigilant about mistakes vs being vigilant about intentions. I agree with your point in the sense that worlds in which the community is skeptical of OP's intentions tend to have worse social dynamics than worlds in which it isn't. But you seem to be implying something beyond that: that people should be less skeptical of OP's intentions given the evidence we see right now, and/or that people should be more hesitant to express that skepticism. Am I understanding you correctly, and what's your reasoning here?
My intuition is that a norm against expressing skepticism of orgs' intentions wouldn't usefully reduce community skepticism, because community members can just see this norm and infer that there's probably some private skepticism (just like I update when reading your comment and the tone of the rest of the thread). And without open communication, community members' level of skepticism will be noisier (for example, Nuño started out much more trusting and deferential than the EA average before he started looking into this).
I agree with you, but unfortunately I think it's inevitable that people doubt the intentions of any privately-managed organisation. This is perhaps an argument for more democratic funding (though one could counter-argue about the motivations of democratically chosen representatives).
Did you also think that breadth of cause exploration is important?
I think [the commitment to causes and hiring expert staff] model makes a great deal of sense … Yet I'm not convinced that this model is the right one for us. Depth comes at the price of breadth.
It seems that you had been conducting shallow and medium-depth investigations since late 2014. So, if there were some suboptimal commitments early on, these should have been revealed by alternatives that the staff would probably have been excited about, since I assume that everyone aims for high impact given their specific expertise.
So, it would depend on the nature of the commitments that earlier decisions created: if these were to create high impact within one's expertise, then that should be great, even if the expertise is US criminal justice reform, specifically.[1] If multiple such focused individuals exchange perspectives, a set of complementary[2] interventions that covers a wide cause landscape emerges.
If this dynamic leads you to put less "trust" in our decisions, I think that's a good thing!
If you think that not trusting you is good, because you are liable to certain suboptimal mechanisms established early on, then are you acknowledging that your recommendations are suboptimal? Where would you suggest that impact-focused donors in EA look?
Indeed, part of our reason for seeding Just Impact was that it could go on to raise a lot more money, resulting in a lot of counterfactual impact. That kind of leverage can take funding from below the bar to above it.
Are you sure that the counterfactual impact is positive, or more positive without your "direct oversight"? For example, it could be that Just Impact donors would otherwise have donated to crime prevention abroad,[3] had another organization reached them before they learned about Just Impact, which solicits a commitment. Or it could be that US CJR donors would not have donated to other effective causes had they not first been introduced to effective giving by Just Impact. Further, do you think that Just Impact can take less advantage of communication with experts in other OPP cause areas (which could create important leverage) when it is an independent organization?
I appreciate the response here, but would flag that this came off, to me, as a bit mean-spirited.
One specific part:
> If you think that not trusting you is good, because you are liable to certain suboptimal mechanisms established early on, then are you acknowledging that your recommendations are suboptimal? Where would you suggest that impact-focused donors in EA look?
1. He said "less trust", not "not trust at all". I took that to mean something like, "don't place absolute reverence in our public messaging."
2. I'm sure anyone reasonable would acknowledge that their recommendations are less than optimal.
3. "Where would you suggest that impact-focused donors in EA look": There's not one true source that you should only pay attention to. You should probably look at a diversity of sources, including OP's work.
"less trust", not "not trust at all". I took that to mean something like, "don't place absolute reverence in our public messaging." … look at a diversity of sources, including OP's work.
That makes sense; that's probably the solution.