Thanks for the comment! I think it’s really useful to hear concerns and have public discussions about them.
As stated earlier, this post went through a few rounds of revisions. We’re trying to strike a balance between publishing useful evaluative takes and not being disrespectful or personally upsetting.
It’s very easy to go too far in either direction. For instance, it’s easy to avoid upsetting anyone by simply not saying anything.
We’re still working out where the best balance lies, and the best ways to be candid while causing as little offense as possible.
I’m sorry that this came off as containing personal attacks.
> Chloe and Jesse are competent and committed people working in a cause area that does not meet the 1000x threshold currently set by GiveWell top charities
Maybe the disagreement is partially in the framing? I think this post was agreeing with you that CJR doesn’t seem to meet the (incredibly high) bar of GiveWell top charities. I think many people came into this thinking that criminal justice reform might meet that bar, so the post was mostly about flagging that, in retrospect, it didn’t seem to.
For what it’s worth, I’d definitely agree that it’s incredibly difficult to meet that bar; there are plenty of highly competent people who couldn’t clear it.
If you have recommendations for ways this post and future evaluations could improve, I’d of course be very curious to hear them.
Hi there -
Thanks for your response, and sorry for the lag. I can’t go into program details due to confidentiality obligations (though I’d be happy to contribute to a writeup if folks at Open Phil are interested), but I can say that I spent a lot of time in the available national and local data trying to make a quantitative EA case for the CJR program. I won’t get into that in this post, but I still think the program was worthwhile for less intuitive reasons.
On the personal comments:
I think this post’s characterization of Chloe and OP, particularly of their motivations, is unfair. The CJR field has gotten a lot of criticism in other EA spaces for being more social justice oriented and explicitly political. Some critiques of the field are warranted (similar to critiques of ineffective humanitarian & health interventions) but I think OP avoided these traps better than many donors. The team funded bipartisan efforts and focused on building the infrastructure needed to accelerate and sustain a new movement. Incarceration in the US exploded in the ’70s as the result of bipartisan action. The assumption that the right coalition of interests could force similarly rapid change in the opposite direction is fair, especially when analyzed against case studies of other social movements. It falls in line with a hits-based giving strategy.
Why I think the program was worthwhile:
The strategic investments made by the CJR team set the agenda for a field that barely existed in 2015 but, by 2021, had hundreds of millions of dollars in outside commitments from major funders and sympathetic officials elected across the US. Bridgespan (a data-focused social impact consulting group incubated at Bain) has used Open Phil grantees’ work to advise foundations, philanthropists, and nonprofits across the political spectrum on their own CJR giving. I’ve met some of the folks who worked on Bridgespan’s CJR analysis. I trust their epistemics and quantitative skills.
I don’t think we’ve seen the CJR movement through to the point where we could do a reliable postmortem on its consequences. But I’ve seen enough to say that OP’s team has mastered some very efficient methods for driving political will and building popular support.
OP’s CJR work could be particularly valuable as a replicable model for other movement building efforts. If nothing else, dissecting the program from that lens could be a really productive conversation.
Other notes:
I disagreed with the CJR team on *a lot*. But they’re good people who were working within a framework that got vetted by OP years ago. And they’re great at what they do. I don’t think speculating on internal motivations is helpful. That said, I would wholeheartedly support a postmortem focused on program outcomes.
I came to the US scene from the UK and was very surprised by the divide (animosity, even) between SJ-aligned and EA-aligned work. I ended up disengaging from both for a while. I’m grateful to the wonderful Oxford folks for reminding me why I got involved in EA in the first place.
Sitting at a table full of people with very different backgrounds, skill sets, and communication styles requires incredible amounts of humility on all sides. I try to seek out opportunities to learn from people who disagree with me, but I’ve missed some incredible learning opportunities because I failed at this.
Thanks so much for sharing that, it adds a lot of context to the conversation.
I really, really hope this post doesn’t act as anything like “the last word” on this topic. It represents Nuno doing his best with only a few weeks of research based on publicly accessible information (which is fairly sparse, and I can understand why). The main thing he focused on was simple cost-effectiveness estimation of the key parameters, compared against GiveWell top charities, which I agree is a very high bar.
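To give a concrete sense of what a naive comparison like that involves, here’s a minimal sketch in Python. Every number, and the generic “benefit units” framing, is a hypothetical placeholder of mine, not a figure from Nuno’s post or from GiveWell:

```python
# Minimal sketch of a naive cost-effectiveness comparison.
# All numbers are hypothetical placeholders, not real estimates.

def cost_effectiveness(total_cost_usd: float, benefit_units: float) -> float:
    """Benefit units produced per dollar spent."""
    return benefit_units / total_cost_usd

# Hypothetical GiveWell-style benchmark charity.
benchmark = cost_effectiveness(total_cost_usd=1_000_000, benefit_units=20_000)

# Hypothetical program under evaluation.
program = cost_effectiveness(total_cost_usd=1_000_000, benefit_units=500)

# A ratio below 1 means the program falls short of the benchmark's bar;
# the real difficulty is estimating benefit_units, not this arithmetic.
ratio = program / benchmark
print(f"Program reaches {ratio:.1%} of the benchmark's cost-effectiveness")
```

(The hard part, of course, is estimating the inputs, which is where most of those few weeks of research would go.)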
I agree work on this scale really could use dramatically more comprehensive analysis, especially if other funders are likely to continue funding effectiveness-maximizing work here.
One small point: I read this analysis much more as suggesting that “CJR is really tough to make effective compared to top GiveWell charities, upon naive analyses” than anything like “the specific team involved did a poor job”.