Excellent post. Could you expand further on your point:
“I think the AI (or perhaps Computer Science) research community is doing a great job (much better than in other areas) at innovating a lot on different peer review systems”
It would be interesting to see how things are done differently in these fields. Even a link to other resources would be great. Thanks.
In Physics (my field) things look much lamer. To start with, we only publish in journals, which might be fine, but it means an ever-longer review process. Single-blind is still widely used. Sharing code is fully optional (you can just say you'll provide it upon reasonable request). And there are often just two referees (or even one); if you're lucky you might get up to four.
But the real problem is the lack of assessment of the reviewing process itself: I don't think journals are making any effort to improve it beyond "looking good" (open access, maybe double-blind)… Since we don't run experiments on the process or try to improve it, it lags behind.
I'd bet that in other sciences it's even worse: chemists and biologists aren't even used to an arXiv equivalent. Social sciences… social sciences are unclear to me, but probably worse (p-value tweaking...). 😅
Hey Eric, thanks! I think that at AI conferences, organizers have played around with a few things:
Some conferences have a two-phase review system (AAAI), others only one (NeurIPS).
Sometimes the chairs read papers beforehand and desk-reject some of them.
Reviews are sometimes published in OpenReview so that everyone can see them.
Referees are asked to provide confidence ratings for their assessments. And so on (see for example https://blog.ml.cmu.edu/2020/12/01/icml2020exp/).