I think the biggest obstacle to funding initiatives like this is that it’s very hard to identify even a single potentially promising project without looking into the space for quite some time. We don’t really have the resources for extensive proactive investigation into grant areas, so someone I reasonably trust suggesting this as a potential meta-science initiative is definitely the biggest reason for us making this grant.
In general, as I mentioned in one of the sections in the writeup above, we are currently constrained to primarily reactive grantmaking, and so are unlikely to fund projects that neither applied to the fund nor were already high on our list of obvious places to consider giving money to.
I have a strong interest in meta-science initiatives, and the Chris Chambers grant was the only application in that space this round, so that combination was definitely a major factor.
However, I do also think that Chambers has achieved some pretty impressive results with his work so far:
Chambers keeps an online spreadsheet with all the journals that have adopted the format [262].
To date, 140 journals have adopted the format, covering the following fields:
+ Life/medical sciences: neuroscience, nutrition, psychology, psychiatry, biology, cancer research, ecology, clinical & preclinical medicine, endocrinology, agricultural and soil sciences
+ Social sciences: political science, financial and accounting research
+ Physical sciences: chemistry, physics, computer science etc.
+ Generalist journals that cover multiple fields: Royal Society Open Science and Nature Human Behaviour
His success so far makes this one of the most successful preregistration projects I know of to date, and it seems likely that further funding will relatively straightforwardly translate into more journals offering Registered Reports as a potential way to publish.
Thank you for the detailed write-ups.
I will focus on where I disagree with the Chris Chambers / Registered Reports grant (note: Chambers is also the grantee of Let’s Fund, the organization I co-founded).
1. What if all clinical trials became Registered Reports?
You write:
“Chambers has the explicit goal of making all clinical trials require the use of registered reports. That outcome seems potentially quite harmful, and possibly worse than the current state of clinical science.”
I think, if all clinical trials became Registered Reports, then there’d be net benefits.
In essence, if you agree that all clinical trials should be preregistered, then Registered Reports are merely preregistration taken to its logical conclusion, by being more stringent (i.e. peer-reviewed, less vague, etc.).
Relevant quote from the Let’s Fund report (Lets-Fund.org/Better-Science):
“The principal differences between pre-registration and Registered Reports are:
+ In pre-registration, trial outcomes or dependent variables and the way of analyzing them are not described as precisely as could be done in a paper
+ Pre-registration is not peer-reviewed
+ Pre-registration also often does not describe the theory that is being tested.
For these reasons, simple pre-registration might not be as good as Registered Reports. For instance, in cancer trials, the descriptions of what will be measured are often of low quality (i.e. vague), leading to ‘outcome switching’ (i.e. switching between planned and published outcomes) [180], [181]. Moreover, data processing can often involve very many seemingly reasonable options for excluding or transforming data [182], which can then be used for data dredging in pre-registered trials (“With 20 binary choices, 2^20 = 1,048,576 different ways exist to analyze the same data.” [183]). Theoretically, preregistration could be more exhaustive and precise, but in practice it rarely is, because it is not peer-reviewed.”
Also, note that exploratory analysis can still be used in Registered Reports, if it’s clearly labelled as exploratory.
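To make the “many analysis pipelines” point in the quote above concrete, here is a minimal, purely illustrative Python sketch (my own, not from the Let’s Fund report; all parameter choices are made up). It simulates two trial arms with no true difference and shows how reporting the smallest p-value across a few binary analysis choices inflates the false positive rate relative to sticking to one pre-specified analysis:

```python
# Purely illustrative simulation (hypothetical parameters): two trial arms drawn
# from the SAME distribution, so any "significant" difference is a false positive.
# Three binary analysis choices give 2**3 = 8 pipelines; with 20 such choices
# there would be 2**20 = 1,048,576.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_SIMULATIONS, N_PER_ARM, ALPHA = 2000, 40, 0.05

def pipeline_p_values(treatment, control):
    """Return one p-value per combination of three binary analysis choices."""
    p_values = []
    for log_transform, exclude_outliers, nonparametric in itertools.product([False, True], repeat=3):
        t, c = treatment, control
        if log_transform:
            t, c = np.log(t), np.log(c)
        if exclude_outliers:
            pooled = np.concatenate([t, c])
            lo, hi = pooled.mean() - 2 * pooled.std(), pooled.mean() + 2 * pooled.std()
            t, c = t[(t >= lo) & (t <= hi)], c[(c >= lo) & (c <= hi)]
        if nonparametric:
            p = stats.mannwhitneyu(t, c, alternative="two-sided").pvalue
        else:
            p = stats.ttest_ind(t, c, equal_var=False).pvalue
        p_values.append(p)
    return p_values

dredged_hits = prespecified_hits = 0
for _ in range(N_SIMULATIONS):
    treatment = rng.lognormal(mean=1.0, sigma=0.5, size=N_PER_ARM)
    control = rng.lognormal(mean=1.0, sigma=0.5, size=N_PER_ARM)
    p_values = pipeline_p_values(treatment, control)
    dredged_hits += min(p_values) < ALPHA        # report whichever pipeline "worked"
    prespecified_hits += p_values[0] < ALPHA     # stick to the single pre-specified pipeline

print(f"False positive rate, cherry-picked pipeline:  {dredged_hits / N_SIMULATIONS:.3f}")
print(f"False positive rate, pre-specified pipeline:  {prespecified_hits / N_SIMULATIONS:.3f}")
```

In this sketch the cherry-picked rate comes out well above the nominal 5%, which is the mechanism that peer-reviewed, precisely specified analysis plans in Registered Reports are meant to shut down.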
----
2. Value of information and bandwidth constraints
You write:
“Ultimately, from a value of information perspective, it is totally possible for a study to only be interesting if it finds a positive result, and to be uninteresting when analyzed pre-publication from the perspective of the editor.”
Generally, a scientist’s prior on the likelihood of a treatment being successful should be roughly proportional to the value of information. In other words, if the likelihood that a treatment is successful is trivially low, then the trial is likely too expensive to be worth running, or will increase the false positive rate.
On bandwidth constraints: this now seems largely a historical artifact from pre-internet days, when journals had limited space and no good search functionality. Back then, it made sense to have a journal like Nature that was very selective and focused on positive results. These days, we can publish as many high-quality null-result papers online in Nature as we want without sacrificing anything, because people don’t read a dead-tree copy of Nature front to back. Scientists now solve the bandwidth constraint differently (e.g. via keyword searches, citation counts, and whether colleagues share a paper on social media).
In your example, you could combine all 100 potential treatments into one paper and simply report whether each worked or not. The cost of reporting that a study was carried out is trivial compared to the other costs involved. If the scientists don’t believe any results are worth reporting, they can simply not report them, and we would still have a record of what was attempted (similar to how it is good that we can see unpublished preregistrations on trials.gov that never went anywhere, as data on the size of publication bias).
3. Implications of major journals implementing Registered Reports
You write:
“Because of dynamics like this, I think it is very unlikely that any major journals will ever switch towards only publishing registered report-based studies, even within clinical trials, since no journal would want to pass up on the opportunity to publish a study that has the opportunity to revolutionize the field.”
This is traded off against top journals publishing biased results. This follows the same logic as the winner’s curse in auction theory, where the winning bidder is the one most likely to have paid more than the true value; similarly, people who publish in Nature are more likely to have overstated their results. This is borne out empirically; see https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0050201
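To illustrate that winner’s-curse analogy, here is a hypothetical sketch of my own (not taken from the linked paper; all numbers are made up): if many labs each estimate the same true effect with independent noise, and only the largest estimate ends up in a top journal, the “published” estimate systematically overstates the truth even though each individual lab is unbiased.

```python
# Illustrative "winner's curse" simulation (made-up numbers): many labs estimate
# the same true effect with independent noise, but only the largest estimate is
# "published" in the top journal. Each lab is unbiased; the published record is not.
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.2            # real standardized effect size shared by all labs
NOISE_SD = 0.15              # standard error of each individual lab's estimate
N_LABS, N_SIMULATIONS = 20, 10_000

# Each row is one "race": N_LABS labs independently estimating the same effect.
estimates = rng.normal(TRUE_EFFECT, NOISE_SD, size=(N_SIMULATIONS, N_LABS))
published = estimates.max(axis=1)  # only the most striking result gets the flagship journal

print(f"True effect:               {TRUE_EFFECT:.2f}")
print(f"Mean single-lab estimate:  {estimates.mean():.2f}")   # unbiased (~0.20)
print(f"Mean published estimate:   {published.mean():.2f}")   # overstated (~0.48)
```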
Registered Reports are simply more trustworthy, and this might change the dynamics so that there will be pressure on journals to adopt the Registered Reports format or fall behind in terms of impact factor.
--
3.1 On clarity
You write:
“As a result, large parts of the paper basically have no selection applied to them for conceptual clarity,”
On clarity: Registered Reports will have more clarity because they’re more theoretically motivated (see https://lets-fund.org/better-science/#h.n85wl9bxcln4) and because the reviewers, instead of being impressed by results, judge papers more on how clearly and in how much detail the methodology is described. This should aid replication attempts and will likely also be a good proxy for the clarity of the conclusion. Scientists are still incentivized to write good conclusions, because they want their work to be cited. Also, the importance of the conclusion will be deemphasized. In the optimal case, a Registered Report has “a comprehensive and analytically sophisticated design, vetted down to each single line of code by the reviewers before data collection began” (https://www.nature.com/articles/s41562-019-0652-0), and this vetting is what happens during the review.
What is missing from the results section is then pretty much only the final numbers, which are plugged in after review and data collection, at which point the results section “writes itself”. The conclusion section is perhaps almost unnecessary if the introduction already motivates the implications of the research results; in many papers, the introduction already serves as a more extensive, speculative summary.
I think the conclusion section will be a quite short and not very important section in Registered Reports, as is increasingly the case anyway (in Nature, there is sometimes no “redundant” conclusion section).
---
4. Is reducing red tape more important?
You write:
“Excessive red tape in clinical research seems like one of the main problems with medical science today”
I don’t think that excessive red tape is one of the main problems with medical science (say, on the same level as publication bias), that IRBs have no benefits, or that Registered Reports add red tape or have much to do with the issue you cite. I think a much bigger problem is research waste, as outlined in the Let’s Fund report.
Most scientists who publish Registered Reports describe the publication experience as quite pleasant with a bit of front-loaded work (see e.g. https://twitter.com/Prolific/status/1153286158983581696). In my view, the benefits far outweigh the costs.
5. On the differential technological development aspect of Registered Reports
On differential tech development, and perhaps as an aside: note that more reliable science has wide-ranging consequences for many other cause areas in EA. Not only has global development had problems with replicability (e.g. https://blogs.worldbank.org/impactevaluations/pre-results-review-journal-development-economics-lessons-learned-so-far and the “worm wars”), but so have areas related to GBCRs (e.g. there is a new Registered Reports initiative for research on influenza; see https://cos.io/our-services/research/flu-lab/).
This is great, and I think these counterpoints are valuable to read for anyone interested in this topic. I disagree with sections of this (and sometimes agree but just think the balance of considerations plays out differently), and will try to find the time to respond in more detail, though it will probably take at least a few weeks.
Note: I think this comment would be considerably easier for me to engage with if it were split into three comments, at the points where you have a break using ‘—’.
Also, if the quotes were formatted using the style native to the editor, where you use a ‘>’ followed by a space, it would be easier for me to read.
Thanks for the heads up. I’ve cleaned up the formatting now to make it more readable.
Datapoint for Hauke: I am also very interested in this topic and in Hauke’s thoughts on it, but found that the formatting made it difficult for me to read it fully.