I agree that it’s an extreme stance and probably overly general (although the specificity to public health and biomedical research is noted in the article).
Still, my feeling is that this is closer to the truth than we’d want. For instance, from working in three research groups (robotics, neuroscience, basic biology), I’ve seen that the topic (e.g. to round out somebody’s profile) and participants (e.g. re-doing experiments somebody else did so they don’t have to be included as an author, instead of just using their results directly) of a paper are often selected mainly on perceived career benefit rather than scientific merit. This is particularly true when the research is driven by junior researchers rather than established professors, as the value of a paper to the former depends much more on whether it will help them get grants and a faculty position than on its scientific merit. For example, it’s very common for a group of post-docs and PhD students to collaborate on a paper without a professor to ‘demonstrate’ their independence, but these collaborations often just end up describing an orphan finding or obscure method that will never really be followed up on, and the junior researchers’ time could arguably have produced more scientifically meaningful results had they focused on their main projects. Of course, it’s hard to evaluate how such practices influence academic progress in the long run, but they seem inefficient in the short term and stem from the perverse incentives of careerism.
My impression is that questionable research practices probably vary a lot by research field, and the fields most susceptible to poor practices are probably those where the value of the findings won’t really be known for a long time, like basic biology research. My experience in neuroscience and biology is that much more ‘spin’, speculation, and storytelling goes into presenting biological findings than there was in robotics (where results are usually clearer steps along a path towards a goal). While a certain amount of storytelling is required to present a research finding convincingly, it has become a bit of a one-upmanship game in biology, where your work really has to be presented as a critical step towards an applied outcome (like curing a disease or inspiring a new type of material) for anybody to take it seriously, even when it’s clearly blue-sky research that hasn’t yet found an application.
As for the author, it looks like he is no longer working in academia. From his publication record it looks like he was quite productive for a mid-career researcher, and although he may have an axe to grind (presumably he applied for many faculty positions but didn’t get any, a common story), being outside the Ivory Tower can provide a lot more perspective on its failings than you get from inside it.
I wouldn’t say that there are no inefficiencies in academia. There are inefficiencies in every line of work.
I would say that, on the whole, a lot of great work still gets done.
I definitely wouldn’t say that academia is rife with “incompetence in concert with a lack of accountability.”
Sure, there are people with Ph.D.s who are not strong researchers. There are a lot of them who are, though.
We may just disagree on the ratio of the two groups based on our own experiences.