Thanks for the thoughts. Note that I’m trying to engage/report here because we’re working hard to make our evaluations visible and impactful, and this forum seems like one of the most promising interested audiences. But also eager to hear about other opportunities to promote and get engagement with this evaluation work, particularly in non-EA academic and policy circles.
I generally aimed to summarize and synthesize what the evaluators had written and the authors’ response, bringing in specific relevant examples and using quotes or paraphrases where possible. I generally didn’t present these as my own opinions but rather as the author’s and the evaluators’, although I did specifically flag ‘my take’ in a few places. If I recall my motivation, I was trying to make this a little less dry to get more engagement within this forum. But maybe that was a mistake.
And to this I added an opportunity to discuss the potential value of doing and supporting rigorous, ambitious, and ‘living/updated’ meta-analyses here and in EA-adjacent areas. I think your response was helpful there, as was the authors’. I’d like to see others’ takes.
Some clarifications:
The i4replication group does put out replication papers/reports in each case, submits these to journals, and reports on the outcome on social media. But IIRC they only ‘weigh in’ centrally when they find a strong case suggesting systematic issues/retractions.
Note that their replications are not ‘opt-in’: they aimed to replicate every paper coming out in a set of ‘top journals’. (And now they are moving towards research focused on a set of global issues like deforestation, but still not opt-in.)
I’m not sure what works for them would work for us, though. It’s a different exercise. I don’t see an easy route towards our evaluations getting attention through ‘submitting them to journals’ (which would also run somewhat counter to our core mission of moving research output and rewards away from ‘journal publication’ as a static output).
Also: I wouldn’t characterize this post as ‘editor commentary’, and I don’t think I have a lot of clout here. Note too that typical peer review is both anonymous and never made public. We’re making all our evaluations public, though the evaluators have the option to remain anonymous.
But your point about a higher bar is well taken. I’ll keep this under consideration.