Thank you for this perspective, very interesting.

I definitely agree with you that a field is not worthless just because the published figures are not reproducible. My assumption would be that even if it has value now, it could be a lot more valuable if reporting were more rigorous and transparent (and that potential increase in value would justify some serious effort to improve rigor and transparency).
Do I understand your comment correctly that you think that, in your field, the purpose of publishing is mainly to communicate to the public, and that publications are not very important for communicating within the field to other researchers or to end users in industry? That got me thinking—if that were the case, would we actually need peer-reviewed publications at all for such a field? I’m thinking that the public would rather read popular science articles anyway, and that those could be produced with much less effort by science journalists? (Maybe I’m totally misunderstanding your point here, but if not I would be very curious to hear your take on such a model).
> it could be a lot more valuable if reporting were more rigorous and transparent
Rigor and transparency are good things. What would we have to do to get more of them, and what would the tradeoffs be?
> Do I understand your comment correctly that you think that, in your field, the purpose of publishing is mainly to communicate to the public, and that publications are not very important for communicating within the field to other researchers or to end users in industry?
No, the purpose of publishing is not mainly to communicate to the public. After all, very few members of the public read scientific literature. The truth-seeking or engineering achievement the lab is aiming for is one thing. The experiments they run to get closer are another. And the descriptions of those experiments are a third thing. That third thing is what you get from the paper.
I find the literature useful at this early stage in my career because it helps me find labs doing work that’s of interest to me. Grantmakers and universities find publications useful to decide who to give money to or who to hire. Publications show your work in a way that a letter of reference or a line on a resume just can’t. Fellow researchers find them useful to see who’s trying what approach to the phenomena of interest. Sometimes, an experiment and its writeup are so persuasive that they actually persuade somebody that the universe works differently than they’d thought.
As you read more literature and speak with more scientists, you start to develop more of a sense of skepticism and a sense of what’s important. What is the paper choosing to highlight, and what is it leaving out? Is the justification for this research really compelling, or is this just a hasty grab at a publication? Should I be impressed by this result?
It would be nice for the reader if papers were a crystal-clear guide for a novice to the field. Instead, you need a decent amount of sophistication with the field to know what to make of it all. Conversations with researchers can help a lot. Read their work and then ask if you can have 20 minutes of their time; they’ll often be happy to answer your questions.
And yes, fields do seem to go down dead ends from time to time. My guess is it’s some sort of self-reinforcing selection for biased, corrupt, gullible scientists who’ve come to depend on a cycle of hype-building to get the next grant. Homophily attracts more people of the same stripe, and the field gets confused.
Tissue engineering is an example. 20-30 years ago, the scientists in that field hyped up the idea that we were chugging toward tissue-engineered solid organs. Didn’t pan out, at least not yet. And when I look at tissue engineering papers today, I fear the same thing might repeat itself. Now we have bioprinters and iPSCs to amuse ourselves with. On the other hand, maybe that’ll be enough to do the trick? Hard to know. Keep your skeptical hat on.
I think that we have a rather similar view actually—maybe it’s just the topic of the post that makes it seem like I am more pessimistic than I am? Even though this post focuses on mapping out problems in the research system, my point is not in any way that scientific research is useless—rather the opposite: I think it is very valuable, and that is why I’m so interested in exploring whether there are ways it can be improved. It’s not at all my intention to say that research, or researchers, or any other people working in the system for that matter, are “bad”.
> It would be nice for the reader if papers were a crystal-clear guide for a novice to the field. Instead, you need a decent amount of sophistication with the field to know what to make of it all.
My concern is not that published papers are not clear guides that a novice could follow or understand. Especially now that there is an active debate around reproducibility, I would also not expect (good) researchers to be naive about it (and that has not at all been my personal experience from working with researchers). Still, it seems to me that if reproducibility is lacking in fields that produce a lot of value, initiatives that improve reproducibility would be very valuable?
> Rigor and transparency are good things. What would we have to do to get more of them, and what would the tradeoffs be?
From what I have seen so far, I think that the work by OSF (particularly on preregistration) and the publications from METRICS seem like they could be impactful—what do you think of these? The ARRIVE guidelines also seem like a very valuable initiative for the reporting of research with animals.
All these projects seem beneficial. I hadn’t heard of any of them, so thanks for pointing them out. It’s useful to frame this as “research on research,” in that it’s subject to the same challenges as any other field of science: reproducibility, and aligning empirical data with theoretical predictions to develop a paradigm. Hence, I support the work while remaining skeptical about whether such interventions will be potent enough to make a positive change.
The reason I brought this up is that the conversation on improving the productivity of science seems to focus almost exclusively on problems with publishing and reproducibility, while neglecting the skill-building and internal-knowledge aspects of scientific research. Scientists seem to get a feel, through their interactions with colleagues, for who is trustworthy and capable and who is not. Without taking the sociology of science into account, it’s hard to know whether measures aimed at publishing and reproducibility will target the mechanisms by which progress can best be accelerated.
Honest, hardworking academic STEM PIs seem to struggle with money and labor shortages. Why isn’t there more money flowing into academic scientific research? Why aren’t more people becoming scientists?
The lack of money in STEM academia seems to me a consequence of politics. Why is there political reluctance to fund academic science at higher levels? Is academia to blame for part of this reluctance, or is the reason purely external to academia? I don’t know the answers to these questions, but they seem important to address.
Why don’t more people strive to become academic STEM scientists? Partly, industry draws them away with better pay. Part of the fault lies in our school system, although I really don’t know what exactly we should change. And part of the fault is probably in our cultural attitudes toward STEM.
Many of the pro-reproducibility measures seem to assume that the fastest road to better science is to make more efficient use of what we already have. I would also like to see us figure out a way to produce more labor and capital in this industry. To be clear, I mean that I would like to see fewer people going into non-STEM fields—I am personally comfortable with viewing people’s decision to go into many non-STEM fields as a form of failure to achieve their potential. That failure isn’t necessarily their fault. It might be the fault of how we’ve set up our school, governance, cultural, or economic systems.