A final reflective note: David, I want to encourage you to think about the optics/politics of this exchange from the point of view of prospective Unjornal participants/authors.
I appreciate the feedback. I’m definitely aware that we want to make this attractive to authors and others, both to submit their work and to engage with our evaluations. Note that in addition to asking for author submissions, our team nominates and prioritizes high-profile and potentially high-impact work, and contacts authors to get their updates, suggestions, and (later) responses. (We generally only require author permission for these evaluations from early-career authors at a sensitive point in their careers.) We are grateful to you for having responded to these evaluations.
There are no incentives to participate.
I would disagree with this. We previously had author prizes (financial and reputational) focused on authors who submitted work for our evaluation, although these prizes are not currently active. I’m keen to revive these prizes when the situation (funding and partners) permits.
But there are a range of other incentives (not directly financial) for authors to submit their work, respond to evaluations, and engage in other ways; I provide a detailed author FAQ here. These include getting constructive feedback, signaling your confidence in your paper and your openness to criticism, the potential for highly positive evaluations to boost your paper’s reputation and visibility, unlocking impact and grants, and more. (Our goal is that these evaluations will ultimately become the object of value in and of themselves, replacing “publication in a journal” as the basis for research credibility and career rewards. But I admit that’s a long path.)
I did it because I thought it would be fun and I was wondering if anyone would have ideas or extensions that improved the paper. Instead, I got some rather harsh criticisms implying we should have written a totally different paper.
I would not characterize the evaluators’ reports in this way. Yes, there was some negative-leaning language, which, as you know, we encourage the evaluators to tone down. But there were a range of suggestions (especially from Jané) which I see as constructive, detailed, and useful, both for this paper and for your future work. And I don’t see this as them suggesting “a totally different paper.” To a large extent they agreed with the importance of this project, with the data collected, and with many of your approaches. They praised your transparency. They suggested some different methods for transforming and analyzing the data and interpreting the results.
Then I got this essay, which was unexpected/unannounced and used, again, rather harsh language to which I objected. Do you think this exchange looks like an appealing experience to others? I’d say the answer is probably not.
I think it’s important to communicate the results of our evaluations to wider audiences, and not only on our own platform. As I mentioned, I tried to fairly characterize your paper, the nature of the evaluations, and your response. I’ve adjusted my post above in response to some of your points where there was a case to be made that I was using loaded language, etc.
Would you recommend that I share any such posts with both the authors and the evaluators before making them? It’s a genuine question (to you and to anyone else reading these comments); I’m not sure of the correct answer.
As to your suggestion at the bottom, I will read and consider it more carefully—it sounds good.
Aside: I’m still concerned with the connotation that replication, extension, and robustness checking are something that should be relegated to graduate students rather than more senior researchers. This seems to diminish the value and prestige of work that I believe to be of the highest practical value for important decisions in the animal welfare space and beyond.
In the replication/robustness-checking domain, I think what i4replication.org is doing is excellent. They’re working with everyone from graduate students to senior professors to do this work, and treating it as a high-value output meriting direct career rewards. I believe they encourage the replicators to be fair (neither excessively conciliatory nor harsh) and to focus on the methodology. We are in contact with i4replication.org and hoping to work with them more closely, with our evaluations and “evaluation games” offering grounded suggestions for robustness and replication checks.
Would you recommend that I share any such posts with both the authors and the evaluators before making them?
Yes. But zooming back out, I don’t know if these EA Forum posts are necessary.
A practice I saw at i4replication (or some other replication lab) is that the editors didn’t provide any “value-added” commentary on any given paper. At least, I didn’t see this in any of their tweets. They link to the evaluation reports plus a response from the author and then leave it at that.
Once in a while, there will be a retrospective on how the replications are going as a whole. But I think they refrain from commenting on any paper.
If I had to rationalize why they do that, my guess is that replications are already an opt-in thing with lots of downside. And psychologically, editor commentary has a lot more potential for unpleasantness. Peer review tends to be anonymous, so it doesn’t feel as personal because the critics are kept secret. But editor commentary isn’t secret, so it actually feels personal, and editors tend to have more clout.
Basically, I think the bar for an editor-commentary post like this should be even higher than for the usual process. And the usual evaluation process already allows for author review and response. So I think a “value-added” post like this should pass a higher bar of diplomacy and insight.
Thanks for the thoughts. Note that I’m trying to engage/report here because we’re working hard to make our evaluations visible and impactful, and this forum seems like one of the most promising interested audiences. But I’m also eager to hear about other opportunities to promote and get engagement with this evaluation work, particularly in non-EA academic and policy circles.
I generally aim just to summarize and synthesize what the evaluators had written and the authors’ response, bringing in some specific relevant examples and using quotes or paraphrases where possible. I generally didn’t present these as my opinions but rather as the authors’ and the evaluators’, although I did specifically give ‘my take’ in a few parts. If I recall correctly, my motivation was to make this a little less dry and get a bit more engagement within this forum. But maybe that was a mistake.
And to this I added an opportunity to discuss the potential value of doing and supporting rigorous, ambitious, and ‘living/updated’ meta-analysis here and in EA-adjacent areas. I think your response was helpful there, as was the authors’. I’d like to see others’ takes.
Some clarifications:
The i4replication group does put out replication papers/reports in each case, submits these to journals, and reports on the outcomes on social media. But IIRC they only ‘weigh in’ centrally when they find a strong case suggesting systematic issues or retractions.
Note that their replications are not ‘opt-in’: they aimed to replicate every paper coming out in a set of ‘top journals’. (And now they are moving towards a research agenda focusing on a set of global issues like deforestation, but still not opt-in.)
I’m not sure that what works for them would work for us, though. It’s a different exercise. I don’t see an easy route towards our evaluations getting attention through ‘submitting them to journals’ (which, naturally, would also be a bit counter to our core mission of moving research output and rewards away from ‘journal publication’ as a static output).
Also: I wouldn’t characterize this post as ‘editor commentary’, and I don’t think I have a lot of clout here. Also note that typical peer review is both anonymous and never made public. We’re making all our evaluations public, but the evaluators have the option to remain anonymous.
But your point about a higher bar is well taken. I’ll keep this under consideration.