(Writing from OP’s point of view here.)
We appreciate that Nuño reached out about an earlier draft of this piece and incorporated some of our feedback. Though we disagree with a number of his points, we welcome constructive criticism of our work and hope to see more of it.
We’ve left a few comments below.
*****
The importance of managed exits
We deliberately chose to spin off our CJR grantmaking in a careful, managed way. As a funder, we want to commit to the areas we enter and avoid sudden exits. This approach:
- Helps grantees feel comfortable starting and scaling projects. We’ve seen grantees turn down increased funding because they were reluctant to invest in major initiatives; they were concerned that we might suddenly change our priorities and force them to downsize (firing staff, ending projects half-finished, etc.).
- Helps us hire excellent program officers. The people we ask to lead our grantmaking often have many other good options. We don’t want a promising candidate to worry that they’ll suddenly lose their job if we stop supporting the program they work on.
Exiting a program requires balancing:
- the cost of additional below-the-bar spending during a slow exit;
- the risks from a faster exit (difficulty accessing grant opportunities or hiring the best program officers, as well as damage to the field itself).
We launched the CJR program early in our history. At the time, we knew that committing to causes was important, but we had no experience in setting expectations about a program’s longevity or what an exit might look like. When we decided to spin off CJR, we wanted to do so in a way that inspired trust from future grantees and program staff. In the end, we struck what felt to us like an appropriate balance between “slow” and “fast”.[1]
It’s plausible that we could have achieved this trust by investing less money and more time/energy. But at the time, we were struggling to scale our organizational capacity to match our available funding; we decided that other capacity-strained projects were a priority.
*****
Open Phil is not a unitary agent
Running an organization involves making compromises between people with different points of view — especially in the case of Open Phil, which explicitly hires people with different worldviews to work on different causes. This is especially true for cases where an earlier decision has created potential implicit commitments that affect a later decision.
I would avoid trying to model Open Phil (or other organizations) as unitary agents whose actions will match a single utility function. The way we handle one situation may not carry over to other situations.
If this dynamic leads you to put less “trust” in our decisions, we think that’s a good thing! We try to make good decisions and often explain our thinking, but we don’t think others should be assuming that all of our decisions are “correct” (or would match the decisions you would make if you had access to all of the relevant info).
*****
“By working in this area, one could gain leverage, for instance [...] leverage over the grantmaking in the area, by seeding Just Impact.”
Indeed, part of our reason for seeding Just Impact was that it could go on to raise a lot more money, resulting in a lot of counterfactual impact. That kind of leverage can take funding from below the bar to above it.
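To make the leverage point concrete, here is a stylized back-of-the-envelope calculation; the cost-effectiveness and fundraising figures below are invented for illustration and are not Open Phil or Just Impact estimates:

```latex
% Illustrative assumptions only: b = funding bar (impact per dollar),
% e = direct cost-effectiveness of the grants, m = external dollars
% raised per seed dollar (assumed to be spent at roughly the same e
% and to be fully counterfactual).
\[
  \text{impact per seed dollar} \;=\; e\,(1 + m)
\]
\[
  \text{e.g. } e = 0.5\,b,\ m = 3 \;\Longrightarrow\; 0.5\,b \times (1 + 3) = 2b \;>\; b
\]
```

Under those made-up numbers, grants that would sit below the bar on their direct effects alone end up above it once the counterfactual money they mobilize is counted.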
*****
> Open Philanthropy might gain experience in grantmaking, learn information, and acquire expertise that would be valuable for other types of giving. In the case of criminal justice reform, I would guess that the specific cause officers—rather than Open Philanthropy as an institution—would gain most of the information. I would also guess that the lessons learnt haven’t generalized to, for instance, pandemic prevention funding advocacy.
This doesn’t accord with our experience. Over six years of working closely with Chloe, we learned a lot about effective funding in policy and advocacy in ways we do expect to accrue to other focus areas. She was also a major factor when we updated our grantmaking process to emphasize the importance of an organization’s leadership for the success of a grant.
It’s possible that we would have learned these lessons otherwise, but given that Chloe was our first program officer, a disproportionate amount of organizational learning came from our early time working with her, and those experiences have informed our practices.
Note that when we launched our programs in South Asian Air Quality and Global Aid Policy, we explicitly stated that we “expect to work in [these areas] for at least five years”. This decision comes from the experience we’ve developed around setting expectations.
So one of the things I’m still confused about is the two spikes in funding, one in 2019 and the other in 2021, both of which can be interpreted as parting grants:
So OP gave half of the funding to criminal justice reform ($100M out of $200M) after writing “GiveWell’s Top Charities Are (Increasingly) Hard to Beat”, and this makes me less likely to think about this in terms of an exit grant and more in terms of, idk, some sort of nefariousness/shenanigans.
The 2019 ‘spike’ you highlight doesn’t represent higher overall spending — it’s a quirk of how we record grants on the website.
Each program officer has an annual grantmaking “budget”, which rolls over into the next year if it goes unspent. The CJR budget was a consistent ~$25 million/year from 2017 through 2021. If you subtract the Just Impact spin-out at the end of 2021, you’ll see that the total grantmaking over that period matches the total budget.
So why does published grantmaking look higher in 2019?
The reason is that our published grants generally “frontload” payment amounts — if we’re making three payments of $3 million in each of 2019, 2020, and 2021, that will appear as a $9 million grant published in 2019.
In the second half of 2019, the CJR team made a number of large, multi-year grants — but payments in future years still came out of their budget for those years, which is why the published totals look lower in 2020 and 2021 (minus Just Impact). Spending against the CJR budget in 2019 was $24 million — slightly under budget.
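If it helps, here is a minimal sketch of the two ways of tallying the same grants; the grant amounts are invented for illustration and are not actual CJR figures:

```python
from collections import defaultdict

# Hypothetical multi-year grants: (publication year, payments by year).
# Amounts (in $M) are invented for illustration; they are not real CJR grants.
grants = [
    (2019, {2019: 3, 2020: 3, 2021: 3}),  # $3M/year for three years
    (2019, {2020: 5, 2021: 5}),           # $5M/year for two years, paid later
    (2020, {2020: 4}),                    # single-year grant
]

published = defaultdict(int)  # what the website shows (frontloaded to publication year)
spending = defaultdict(int)   # what counts against each year's program budget

for pub_year, payments in grants:
    published[pub_year] += sum(payments.values())  # whole grant appears in one year
    for year, amount in payments.items():
        spending[year] += amount                   # each payment hits that year's budget

print(dict(published))  # {2019: 19, 2020: 4}: a published "spike" in 2019
print(dict(spending))   # {2019: 3, 2020: 12, 2021: 8}: payments spread across years
```

Both tallies describe the same grants; the first simply books everything in the year the grant is published, which is why a cluster of multi-year grants can look like a spending spike.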
So the actual picture here is “CJR’s budget was consistent from 2017-2021 until the spin-out”, not “CJR’s budget spiked in the second half of 2019”.
So this doesn’t really dissolve my curiosity.
In dialog form, because otherwise this would have been a really long paragraph:
NS: I think that the spike in funding in 2019, right after the “GiveWell’s Top Charities Are (Increasingly) Hard to Beat” blogpost, is suspicious.
AG: Ah, but it’s not higher spending. Because of our accounting practices, it’s rather an increase in future funding commitments. So your chart isn’t about “spending”; it’s about “locked-in spending commitments”. And in fact, in the next few years, spending-as-recorded goes down because the locked-in funding is spent.
NS: But why the increase in locked-in funding commitments in 2019? It still seems suspicious, even if marginally less so.
AG: Because we frontload our grants; many of the grants in 2019 were for grantees to use for 2-3 years.
NS: I don’t buy that. I know that many of the grants in 2019 were multi-year (frontloaded), but previous grants in the space were not as frontloaded, or not as frontloaded in that volume. So I think there is still something I’m curious about, even if the mechanistic aspect is more clear to me now.
AG: ¯\_(ツ)_/¯ (I don’t know what you would say here.)
> If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!
I will push back a bit on this as well. I think it’s very healthy for the community to be skeptical of Open Philanthropy’s reasoning ability, and to be vigilant about trying to point out errors.
On the other hand, I don’t think it’s great if we have a dynamic where the community is skeptical of Open Philanthropy’s intentions. Basically, there’s a big difference between “OP made a mistake because they over/underrated X” and “OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants.”
> Basically, there’s a big difference between “OP made a mistake because they over/underrated X” and “OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants.”
The synthesis position might be something like “some subset of OP made a mistake because they were subconsciously politically or PR motivated and unintentionally made sub-optimal grants.”
I think this is a reasonable candidate hypothesis, and should not be that much of a surprise, all things considered. We’re all human.
FWIW I would be surprised to see you, Linch, make a suboptimal grant out of PR motivation. I think Open Phil is capable of being in a place where it can avoid making noticeably suboptimal grants due to bad subconscious motivations.
I agree that there’s a difference in the social dynamics of being vigilant about mistakes vs being vigilant about intentions. I agree with your point in the sense that worlds in which the community is skeptical of OP’s intentions tend to have worse social dynamics than worlds in which it isn’t.
But you seem to be implying something beyond that: that people should be less skeptical of OP’s intentions given the evidence we see right now, and/or that people should be more hesitant to express that skepticism. Am I understanding you correctly, and what’s your reasoning here?
My intuition is that a norm against expressing skepticism of orgs’ intentions wouldn’t usefully reduce community skepticism, because community members can just see this norm and infer that there’s probably some private skepticism (just like I update when reading your comment and the tone of the rest of the thread). And without open communication, community members’ level of skepticism will be noisier (for example, Nuño starting out much more trusting and deferential than the EA average before he started looking into this).
I agree with you, but unfortunately I think it’s inevitable that people doubt the intentions of any privately-managed organisation. This is perhaps an argument for more democratic funding (though one could counter-argue about the motivations of democratically chosen representatives).
Did you also think that breadth of cause exploration is important?
> I think [the commitment to causes and hiring expert staff] model makes a great deal of sense … Yet I’m not convinced that this model is the right one for us. Depth comes at the price of breadth.
It seems that you had been conducting shallow and medium-depth investigations since late 2014. So, if there were suboptimal commitments early on, these should have been revealed by alternatives that staff would probably have been excited about, since I assume that everyone aims for high impact given their specific expertise.
So, it would depend on the nature of the commitments that earlier decisions created: if these were to create high impact within one’s expertise, then that should be great, even if the expertise is US criminal justice reform, specifically.[1] If multiple such focused individuals exchange perspectives, a set of complementary[2] interventions that covers a wide cause landscape emerges.
> If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!
If you think that not trusting you is good, because you are liable to certain suboptimal mechanisms established early on, then are you acknowledging that your recommendations are suboptimal? Where would you suggest that impact-focused donors in EA look?
> Indeed, part of our reason for seeding Just Impact was that it could go on to raise a lot more money, resulting in a lot of counterfactual impact. That kind of leverage can take funding from below the bar to above it.
Are you sure that the counterfactual impact is positive, or more positive without your ‘direct oversight’? For example, it could be that Just Impact donors would have otherwise donated to crime prevention abroad,[3] if another organization had influenced them before they learned about Just Impact, which solicits a commitment. Or, it could be that US CJR donors would not have donated to other effective causes had they not first been introduced to effective giving by Just Impact. Further, do you think that Just Impact can take less advantage of communication with experts in other OPP cause areas (which could create important leverage) when it is an independent organization?
I appreciate the response here, but would flag that this came off, to me, as a bit mean-spirited.
One specific part:
> If you think that not trusting you is good, because you are liable to certain suboptimal mechanisms established early on, then are you acknowledging that your recommendations are suboptimal? Where would you suggest that impact-focused donors in EA look?
1. He said “less trust”, not “not trust at all”. I took that to mean something like, “don’t place absolute reverence in our public messaging.”
2. I’m sure anyone reasonable would acknowledge that their recommendations are less than optimal.
3. “Where would you suggest that impact-focused donors in EA look” → There’s not one true source that you should only pay attention to. You should probably look at a diversity of sources, including OP’s work.
> “less trust”, not “not trust at all”. I took that to mean something like, “don’t place absolute reverence in our public messaging.” … look at a diversity of sources, including OP’s work.
That makes sense, probably the solution.