Disclosure: I worked with Open Phil’s CJR team for ~4 months in 2020-2021 and was in touch with them for ~6 months before that.
I’m very concerned by the way this post blends speculative personal attacks with legitimate cost-effectiveness questions.
Chloe and Jesse are competent and committed people working in a cause area that does not meet the 1000x threshold currently set by GiveWell top charities. If it were easy to cross that bar, these charities would not be the gold standard for neartermist, human-focused giving. Open Phil chose to bet on CJR as a cause area, conduct a search, and hire Chloe anyway.
I genuinely believe policy- and politics-focused EAs could learn a lot from the CJR team’s movement building work. Their strengths in political coordination and movement strategy are underrepresented in EA.
I bought the idea that we could synthesize knowledge from different fields and coordinate to solve the world’s most pressing problems. That won’t happen if we can’t respectfully engage with people who think or work differently from the community baseline.
We can’t significantly improve the world without asking hard questions. We can ask hard questions without dismissing others or assuming that difference implies inferiority.
[I only got back on the forum to reply to this post.]
So here is the thing: Chloe and her team’s virtues and flaws are amplified by the fact that they are in charge of millions of dollars. And so I think that having good models here requires mixing speculative judgments about personal character with cost-effectiveness estimates.
At this point I can either:
1. Not develop good models of the world
2. Develop ¿good? models but not share them
3. Develop them and share them
Ultimately I went with option 3, though I stayed roughly three months in option 2. It’s possible this wasn’t optimal. I think the deciding factor was having two cost-effectiveness estimates which ranged over 2-3 orders of magnitude and yet were non-overlapping. I could have published those estimates alone, but I don’t think they can stand alone: the immediate answer is that Open Philanthropy knows something I don’t, and so the rest of the post is in part an exploration of whether that’s the case.
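For intuition, here is a minimal sketch in Python, with entirely made-up numbers rather than the actual estimates from the post, of how two interval estimates can each span a couple of orders of magnitude and still fail to overlap:

```python
import math

# Made-up numbers purely for illustration -- NOT the estimates from the post.
# The point: two interval estimates (here in "multiples of GiveDirectly") can
# each span ~2 orders of magnitude and still not overlap, which makes the
# comparison decisive despite the large uncertainty.

def width_in_orders_of_magnitude(interval):
    low, high = interval
    return math.log10(high / low)

def overlaps(a, b):
    """True if two (low, high) intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

estimate_a = (0.05, 5.0)     # hypothetical estimate under one model
estimate_b = (20.0, 2000.0)  # hypothetical estimate under another model

print(width_in_orders_of_magnitude(estimate_a))  # 2.0
print(width_in_orders_of_magnitude(estimate_b))  # 2.0
print(overlaps(estimate_a, estimate_b))          # False: non-overlapping despite the width
```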
> Chloe and Jesse are competent and committed people working in a cause area that does not meet the 1000x threshold currently set by GiveWell top charities. If it were easy to cross that bar, these charities would not be the gold standard for neartermist, human-focused giving. Open Phil chose to bet on CJR as a cause area, conduct a search, and hire Chloe anyway.
I don’t disagree with the meat of this paragraph, though note that Jesse Rothman is not working on criminal justice reform anymore, I think (see the CEA teams page).
> I genuinely believe policy- and politics-focused EAs could learn a lot from the CJR team’s movement building work. Their strengths in political coordination and movement strategy are underrepresented in EA.
I imagine this is one of the reasons why CEA hired Jesse Rothman to work on EA groups, and why he chose to take that role.
> I bought the idea that we could synthesize knowledge from different fields and coordinate to solve the world’s most pressing problems. That won’t happen if we can’t respectfully engage with people who think or work differently from the community baseline.

> We can’t significantly improve the world without asking hard questions. We can ask hard questions without dismissing others or assuming that difference implies inferiority.
Yes, but sometimes you can’t answer the hard questions without being really unflattering. For instance, assume for a moment that my cost effectiveness estimates are roughly correct. Then there were moments where Chloe could have taken the step of saying “you know what, actually donating $50M to GiveDirectly or to something else would be more effective than continuing my giving through JustImpact”. This would have been pretty heroic, and the fact that she failed to be heroic is at least a bit unflattering.
I’m not sure how this translates to your “assuming inferiority” framing. People routinely fail to be heroic. Maybe it’s too harsh and unattainable a standard. On the other hand, maybe holding people and organizations to that standard will help them become stronger, if they want to. I think that’s what I implicitly believe.
Hi, can you give an example of a speculative personal attack in the post that you’re referring to?

How about:

> it seems plausible to me that this relative lack of quantitative inclination played a role in Open Philanthropy making comparatively suboptimal grants in the criminal justice space
I read this as a formal and softened way of saying “Chloe made avoidably bad grants because she wouldn’t do the math”. Different people will interpret the softening differently: it can come across either as “hey maybe this could have been a piece of what happened?” or “this is totally what I think happened, but if I say it bluntly that would be rude”.
Yeah, well, I haven’t thought about this case much, so maybe there’s some good counterargument, but I think of personal attacks as “this person’s hair looks ugly” or “this person isn’t fun at parties”, not “this person is not strong in an area of the job that I think is key”. Professional criticism seems quite different from personal attacks, and I hold different norms around how appropriate each is to bring up in public contexts.
Sure, being professionally criticized is a challenge and can easily be unpleasant, but it’s not irrelevant or off-topic, and it can be quite valuable and important.

Can you give specific examples of this, which might help to communicate these advantages and support your comment?
Thanks for the comment! I think it’s really useful to hear concerns and have public discussions about them.
As stated earlier, this post went through a few rounds of revisions. We’re trying to strike a balance between publishing useful evaluative takes and not being disrespectful or personally upsetting.
I think it’s very easy to go too far on either side of this. It’s very easy to not upset anyone, but also not say anything, for instance.
We’re still trying to find the right balance, as well as the best ways to be candid while giving little offense.
I’m sorry that this came across as containing personal attacks.
> Chloe and Jesse are competent and committed people working in a cause area that does not meet the 1000x threshold currently set by GiveWell top charities
Maybe the disagreement is partially in the framing? I think this post was agreeing with you that it doesn’t seem to match the (incredibly high) bar of GiveWell top charities. I think many people came at this thinking that maybe criminal justice did meet that bar, so this post was mostly about flagging that in retrospect, it didn’t seem to.
For what it’s worth, I’d definitely agree that it is incredibly difficult to meet that bar. There are lots of incredibly competent people who couldn’t do that.
If you have recommendations for ways this post and future evaluations could improve, I’d of course be very curious.

Hi there -
Thanks for your response and sorry for my lag. I can’t go into program details due to confidentiality obligations (though I’d be happy to contribute to a writeup if folks at Open Phil are interested), but I can say that I spent a lot of time in the available national and local data trying to make a quantitative EA case for the CJR program. I won’t get into that on this post, but I still think the program was worthwhile for less intuitive reasons.
On the personal comments:
I think this post’s characterization of Chloe and OP, particularly of their motivations, is unfair. The CJR field has gotten a lot of criticism in other EA spaces for being more social justice oriented and explicitly political. Some critiques of the field are warranted (similar to critiques of ineffective humanitarian & health interventions) but I think OP avoided these traps better than many donors. The team funded bipartisan efforts and focused on building the infrastructure needed to accelerate and sustain a new movement. Incarceration in the US exploded in the ’70s as the result of bipartisan action. The assumption that the right coalition of interests could force similarly rapid change in the opposite direction is fair, especially when analyzed against case studies of other social movements. It falls in line with a hits-based giving strategy.
Why I think the program was worthwhile:
The strategic investments made by the CJR team set the agenda for a field that barely existed in 2015 but, by 2021, had hundreds of millions of dollars in outside commitments from major funders and sympathetic officials elected across the US. Bridgespan (a data-focused social impact consulting group incubated at Bain) has used Open Phil grantees’ work to advise foundations, philanthropists, and nonprofits across the political spectrum on their own CJR giving. I’ve met some of the folks who worked on Bridgespan’s CJR analysis. I trust their epistemics and quantitative skills.
I don’t think we’ve seen the CJR movement through to the point where we could do a reliable postmortem on consequences. I’ve seen enough to say that OP’s team has mastered some very efficient methods for driving political will and building popular support.
OP’s CJR work could be particularly valuable as a replicable model for other movement building efforts. If nothing else, dissecting the program from that lens could be a really productive conversation.
Other notes
I disagreed with the CJR team on *a lot*. But they’re good people who were working within a framework that got vetted by OP years ago. And they’re great at what they do. I don’t think speculating on internal motivations is helpful. That said, I would wholeheartedly support a postmortem focused on program outcomes.
I came to the US scene from the UK and was very surprised by the divide (animosity) between SJ-aligned and EA-aligned work. I ended up disengaging from both for a while. I’m grateful to the wonderful Oxford folks for reminding me why I got involved in EA in the first place.
Sitting at a table full of people with very different backgrounds / skill sets / communication styles requires incredible amounts of humility on all sides. I actively seek out opportunities to learn from people who disagree with me, but I’ve missed out on some incredible learning opportunities because I failed at this.
Thanks so much for sharing that, it adds a lot of context to the conversation.
I really, really hope this post doesn’t act anything like “the last word” on this topic. This post was Nuno doing his best with only a few weeks of research based on publicly-accessible information (which is fairly sparse, and I could understand why). The main thing he was focused on was simple cost-effectiveness estimation of the key parameters, compared to GiveWell top charities, which I agree is a very high bar.
I agree work on this scale really could use dramatically more comprehensive analysis, especially if other funders are likely to continue funding effectiveness-maximizing work here.
One small point: I read this analysis much more as suggesting that “CJR is really tough to make effective compared to top GiveWell charities, upon naive analyses” than anything like “the specific team involved did a poor job”.

Does “1000x” refer to something in particular, or are you just saying that the GiveWell top charities set a high bar?
I understood it as the combination of the 100x Multiplier discussed by Will MacAskill in Doing Good Better (referring to the idea that cash is 100x more valuable for somebody in extreme poverty than for someone in the global top 1%), and GiveWell’s current bar for funding set at 8x GiveDirectly. This would mean that Open Philanthropy targets donation opportunities that are at least 800x (or more like 1000x on average) more impactful than giving that money to a rich person.
Yeah, see this Open Philanthropy post. Or think about the difference in the value of an additional dollar to someone living on $500/year vs an additional dollar to someone living on $50k/year, given log utility.
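For what it’s worth, here is a minimal sketch of the arithmetic behind these two comments, assuming log utility; the consumption figures and the 8x-GiveDirectly bar are the ones quoted above, used purely for illustration:

```python
import math

# With log utility, the marginal value of an extra dollar is proportional to
# 1 / consumption, so the ratio of marginal values is just the ratio of the
# two consumption levels.
poor_consumption = 500      # $/year, someone in extreme poverty (figure from the comment above)
rich_consumption = 50_000   # $/year, someone in the global top few percent

marginal_value_ratio = rich_consumption / poor_consumption
print(marginal_value_ratio)  # 100.0 -> the "100x multiplier"

# Equivalently, compare the utility gain from one extra dollar at each level.
du_poor = math.log(poor_consumption + 1) - math.log(poor_consumption)
du_rich = math.log(rich_consumption + 1) - math.log(rich_consumption)
print(du_poor / du_rich)     # ~100, same conclusion

# GiveWell's current funding bar is roughly 8x as cost-effective as direct
# cash transfers (GiveDirectly), which stacks on top of the 100x multiplier.
givewell_bar_vs_cash = 8
print(marginal_value_ratio * givewell_bar_vs_cash)  # 800 -> roughly the "~1000x" bar
```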