Would you recommend that I share any such posts with both the authors and the evaluators before posting them?
Yes. But zooming back out, I don’t know if these EA Forum posts are necessary.
A practice I saw at i4replication (or some other replication lab) is that the editors didn’t provide any “value-added” commentary on any given paper. At least, I didn’t see any in their tweets. They link to the evaluation reports plus a response from the author and leave it at that.
Once in a while, there will be a retrospective on how the replications are going as a whole. But I think they refrain from commenting on any paper.
If I had to rationalize why they did that, my guess is that replications are already an opt-in thing with lots of downside, and psychologically, editor commentary has a lot more potential for unpleasantness. Peer review tends to be anonymous, so it doesn’t feel as personal because the critics are kept secret. But editor commentary isn’t secret, so it feels personal, and editors tend to have more clout.
Basically, I think the bar for an editor commentary post like this should be even higher than for the usual process. And the usual evaluation process already allows for author review and response. So I think a “value-added” post like this should pass a higher bar of diplomacy and insight.
Thanks for the thoughts. Note that I’m trying to engage and report here because we’re working hard to make our evaluations visible and impactful, and this forum seems like one of the most promising interested audiences. But I’m also eager to hear about other opportunities to promote and get engagement with this evaluation work, particularly in non-EA academic and policy circles.
I generally aim to just summarize and synthesize what the evaluators wrote and the authors’ response, bringing in specific relevant examples and using quotes or paraphrases where possible. I generally presented these not as my own opinions but as the authors’ and the evaluators’, although I did specifically give ‘my take’ in a few parts. If I recall, my motivation was to make this a little less dry and get a bit more engagement on this forum. But maybe that was a mistake.
And to this I added an opportunity to discuss the potential value of doing and supporting rigorous, ambitious, and ‘living/updated’ meta-analysis here and in EA-adjacent areas. I think your response was helpful there, as was the authors’. I’d like to see others’ takes.
Some clarifications:
The i4replication group does put out replication papers/reports in each case, submits these to journals, and reports on the outcome on social media. But IIRC they only ‘weigh in’ centrally when they find a strong case suggesting systematic issues/retractions.
Note that their replications are not ‘opt-in’: they aimed to replicate every paper coming out in a set of ‘top journals’. (And now they are moving towards research focused on a set of global issues like deforestation, but still not opt-in.)
I’m not sure what works for them would work for us, though. It’s a different exercise. I don’t see an easy route towards our evaluations getting attention through ‘submitting them to journals’ (which, naturally, would also run a bit counter to our core mission of moving research output and rewards away from ‘journal publication as a static output’).
Also: I wouldn’t characterize this post as ‘editor commentary’, and I don’t think I have a lot of clout here. Note also that typical peer review is both anonymous and never made public. We’re making all our evaluations public, but the evaluators have the option to remain anonymous.
But your point about a higher bar is well taken. I’ll keep this under consideration.