I agree that limitations on RCTs are a reason to devalue them relative to other methodologies. They still add value over our priors, but I think the best use cases for RCTs are when they're cheap and can be done at scale (e.g., in the context of online surveys), or when you are randomizing an expensive intervention that would be provided anyway, so that the relative cost of the RCT is low.
When the costs of RCTs are large, I think there's reason to favor other methodologies, such as regression discontinuity designs, which have fared quite well compared to RCTs (https://onlinelibrary.wiley.com/doi/abs/10.1002/pam.22051).
I agree that it would be important to weigh the costs and benefits—I don’t think it’s exclusively an issue with RCTs, though.
One thing that could help in doing this calculus is a better understanding of when our non-study-informed beliefs are likely to be accurate.
I know at least some researchers are working in this area—Stefano DellaVigna and Devin Pope are planning to follow up their excellent papers on predictions with another that examines how well people predict results based on differences in context, and Aidan Coville and I also have some work in this area using impact evaluations in development and predictions gathered from policymakers, practitioners, and researchers.
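To make that concrete, one simple way to quantify how accurate non-study-informed beliefs are is to score forecasts of effect sizes against the estimates the studies later produce. A minimal sketch, with entirely made-up numbers and an arbitrary tolerance chosen just for illustration:

```python
# Hypothetical illustration: scoring forecasts of effect sizes against the
# estimates the studies later produced. All numbers here are made up.

import numpy as np

# Forecasted and realized effects (standardized, e.g., Cohen's d) for five studies
forecasts = np.array([0.10, 0.25, 0.05, 0.40, 0.15])
realized  = np.array([0.12, 0.08, 0.02, 0.35, 0.30])

abs_error = np.abs(forecasts - realized)
print(f"Mean absolute error: {abs_error.mean():.2f}")

# One crude heuristic: if forecasts are usually within some tolerance of the
# realized estimate, a new study adds less over existing beliefs.
tolerance = 0.10
print(f"Share of forecasts within {tolerance}: {(abs_error <= tolerance).mean():.0%}")
```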
Would the development of a VoI checklist be helpful here? Heuristics and decision criteria similar to the flowchart that the Campbell Collaboration has for experimental design.
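Something like that could help. The quantitative core of such a checklist might be a rough value-of-information calculation run before committing to a study. Below is a minimal sketch under strong simplifying assumptions: the study is treated as delivering perfect information, there is a single scale-up decision, and the prior, costs, and population are all made-up numbers purely for illustration.

```python
# Illustrative sketch only: a crude expected-value-of-information calculation
# for deciding whether an RCT is worth its cost. All numbers are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Prior belief about the intervention's effect (e.g., income gain in $/person/year)
prior_mean, prior_sd = 20.0, 30.0

# Decision: scale up the program (costing $15/person/year) or not
program_cost = 15.0
population = 100_000
study_cost = 500_000.0  # cost of running the RCT

def payoff(effect, scale_up):
    """Net benefit under the assumed decision rule."""
    return population * (effect - program_cost) if scale_up else 0.0

prior_draws = rng.normal(prior_mean, prior_sd, 100_000)

# Value of deciding on the prior alone: commit ex ante to whichever action looks better.
value_without_study = max(np.mean([payoff(e, True) for e in prior_draws]), 0.0)

# Value with (for simplicity) perfect information: decide after seeing the true effect.
value_with_info = np.mean([max(payoff(e, True), 0.0) for e in prior_draws])

evpi = value_with_info - value_without_study
print(f"Expected value of perfect information: ${evpi:,.0f}")
print(f"Exceeds the study's cost? {evpi > study_cost}")
```

A real checklist would of course use the expected value of sample information rather than perfect information, and fold in considerations like external validity and publication value, but even this crude version forces the prior and the decision at stake to be made explicit.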