This is a really helpful post—thank you! It does blow my mind slightly that this isn’t more broadly practiced, if the argument holds, but I think it holds!
I don’t know enough about the market for academic papers, but I wonder if you’d be interested in writing this up for a more academic audience? You could look at some set of recent RCTs and estimate the potential savings (or, more ambitiously, the increase in power and the associated improvement in the ability to detect effects).
Given that the argument is statistical rather than specific to economics or development in any practical way, do you know whether this happens in biomedicine? Trials often involve pitting newer, more expensive interventions against an existing standard of care.
Thanks Chris, that’s a cool idea. I will give it a go (in a few days; I have an EAG to recover from...)
One thing I should note is that other comments on this post suggest this is well known and applied, which doesn’t knock the idea but would reduce the value of doing more promotion. Conversely, my super quick, low-N look into cash RCTs (in my reply below to David Reinstein) suggests it is not so common. Since the approach you suggest would partly involve listing a bunch of RCTs and their treatment/control sizes (so we can see whether they are cost-optimised), it could also serve as a nice check of just how often this adjustment is/isn’t applied in RCTs.
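To make that check concrete, here is a rough sketch (in Python, with entirely made-up numbers rather than figures from any real trial) of the comparison I have in mind: it applies the standard square-root allocation rule for unequal per-participant costs and compares the power of a 50/50 split against the cost-optimal split at the same budget. The helper names (`power`, `allocations`), the budget, the per-arm costs, the effect size and the outcome SD are all placeholder assumptions of mine.

```python
# Toy sketch: 50/50 split vs cost-optimal split at a fixed budget, using the
# square-root rule n_treat / n_control = sqrt(cost_control / cost_treat) for a
# difference-in-means estimate with equal outcome variance in both arms.
# All numbers are hypothetical placeholders, not from any real trial.
from math import sqrt
from statistics import NormalDist

def power(n_t, n_c, effect, sigma, alpha=0.05):
    """Approximate power of a two-sided z-test for a difference in means."""
    se = sigma * sqrt(1 / n_t + 1 / n_c)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - abs(effect) / se)

def allocations(budget, cost_t, cost_c):
    """Equal-n vs cost-optimal arm sizes for the same total budget."""
    n_equal = budget / (cost_t + cost_c)                   # 50/50 split
    n_t_opt = budget / (cost_t + sqrt(cost_t * cost_c))    # square-root rule
    n_c_opt = budget / (cost_c + sqrt(cost_t * cost_c))
    return (n_equal, n_equal), (n_t_opt, n_c_opt)

# Hypothetical cash-transfer-style costs: a $500 transfer plus $20 survey cost
# per treated participant, $20 survey cost per control participant.
budget, cost_t, cost_c = 200_000, 520, 20
(eq_t, eq_c), (opt_t, opt_c) = allocations(budget, cost_t, cost_c)

effect, sigma = 0.15, 1.0  # assumed standardised effect size and outcome SD
print(f"50/50 split:  n_t={eq_t:.0f}, n_c={eq_c:.0f}, power={power(eq_t, eq_c, effect, sigma):.2f}")
print(f"Cost-optimal: n_t={opt_t:.0f}, n_c={opt_c:.0f}, power={power(opt_t, opt_c, effect, sigma):.2f}")
print(f"Optimal n_t/n_c ratio = sqrt(cost_c/cost_t) = {sqrt(cost_c / cost_t):.2f}")
```

With these toy numbers the cost-optimal design puts most participants in the cheap control arm and gains noticeable power at the same budget, which is roughly the size of gap an audit of real trials could try to quantify.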
For bio, that’s way outside my field, so I defer to Joshua’s comment here on limited participant numbers, which makes sense. Though in a situation like the early COVID vaccine trials, where perhaps you had limited treatment doses and lots of willing volunteers, it might be more applicable? I guess pharma companies are heavily incentivised to optimise trial costs though; if they don’t do it, there’ll be a reason!
Often recruiting is the bottleneck in biomedicine, so you want to maximise power for a given number of participants.
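As a quick illustration of that point (again a toy sketch with made-up numbers, assuming equal outcome variance in both arms): when total N rather than budget is the binding constraint, the standard error of a difference in means is minimised by an equal split, so the cost-driven skew discussed above wouldn’t help.

```python
# Toy illustration: for a fixed total N, the SE of a difference in means is
# smallest at a 50/50 split; skewing the allocation in either direction hurts.
from math import sqrt

n_total, sigma = 1_000, 1.0  # assumed total sample size and outcome SD
for share_treated in (0.3, 0.4, 0.5, 0.6, 0.7):
    n_t = n_total * share_treated
    n_c = n_total - n_t
    se = sigma * sqrt(1 / n_t + 1 / n_c)
    print(f"treated share {share_treated:.1f}: SE = {se:.4f}")
# The 0.5 split gives the smallest SE here (~0.063).
```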