I’ve noticed that a lot of the research papers related to artificial intelligence that I see folks citing are not peer reviewed. They tend to be research papers posted to arXiv, papers produced by a company/organization, or otherwise papers that haven’t been reviewed and published in respected/mainstream academic journals.
Is this a concern? I know that there are plenty of problems with the system of academic publishing, but are non-peer reviewed papers fine?
Reasons my gut feeling might be wrong here:
maybe I’m overly focused due to a sort of status quo bias, overly concerned about anything different from the standard system.
maybe experts in the area find these papers to be of acceptable quality.
maybe the handful of papers I’ve seen outside of traditional peer review aren’t representative, suggesting a sort of availability bias, and actually the vast majority of new AI-relevant papers that people care about are really published in top journals. I’m just browsing the internet, so maybe if I were a researcher in this area speaking with other researchers I would have a better sense of what is actually meaningful.
maybe artificial intelligence is an area where peer review doesn’t matter as much, as results can be easily replicated (unlike, say, a history paper, where maybe you didn’t have access to the same archive or field site as the paper’s author did).
I work in AI. Most papers, in peer reviewed venues or not, are awful. Some, in both categories, are good. Knowing whether a work is peer reviewed is weak evidence of quality, since so many good researchers think peer review is dumb and don’t bother (especially in safety). E.g. I would generally consider “comes from a reputable industry lab” to be somewhat stronger evidence. Imo the reason “was it peer reviewed” is a useful signal in some fields is largely because the best researchers there try to get their work peer reviewed, so not being peer reviewed is strong evidence of incompetence. That’s not the case in AI.
So, it’s an issue, but in the same way that all citations are problematic if you can’t check them yourself or trust the authors to do due diligence.
My understanding is that peer review is somewhat less common in computer science fields because research is often published in conference proceedings without extensive peer review. Of course, you could say that the conference itself is doing the vetting here, and computer science often has the advantage of easy replication by running the supplied code. This applies to some of the papers people are providing… but certainly not all of them.
Peer review is far from perfect, but if something isn’t peer reviewed I won’t fully trust it unless it’s gone through an equivalent amount of vetting by other means. I mean, I won’t fully trust a paper that has gone through external peer review, so I certainly won’t immediately trust something that has gone through nothing.
I’m working on an article about this, but I consider the lack of sufficient vetting to be one of the biggest epistemological problems in EA.
Actually, computer science conferences are peer reviewed. They play a similar role as journals in other fields. I think it’s just a historical curiosity that it’s conferences rather than journals that are the prestigious places to publish in CS!
Of course, this doesn’t change the overall picture of some AI work and much AI safety work not being peer reviewed.