I’m Jonathan Nankivell, an undergraduate in my last year studying Mathematics. My interests are in ML and collaborative epistemics.
I had to discover EA twice before it stuck. My first random walk was ‘psychology → big five framework → principal component analysis → pol.is → RadicalxChange → EA’ and my second was ‘effect of social media → should I read the news? → Ezra Klein on the 80,000 Hours podcast → EA’.
Good comment and good points.
I guess the aim of my post was two-fold:
In all the discussion of the explore-exploit trade-off, I’ve never heard anyone describe it as a frontier that you can be on or off. The explore-exploit frontier is hopefully a useful framework to add to this dialogue.
The literature on clinical trial design is imo full of great ideas never tried. This is partly due to genuine difficulties and partly due to a general lack of awareness of the benefits these ideas offer. I think we need good writing on this topic for a generalist audience, and this is my attempt.
You’re definitely right that the caveat is a large one. Adaptive designs are not appropriate everywhere, which is why this post raises points for discussion and doesn’t provide a fixed prescription.
To respond to your specific points:
Section three discusses whether adaptive designs lead to:

1. a substantial chance of allocating more patients to an inferior treatment
2. reduced statistical power
3. more challenging statistical inference
4. difficulty making robust inference if there is potential for time trends
5. a trial that is more challenging to implement in practice.
My understanding of the authors’ position is that it depends on the trial design. Drop-the-Loser, for example, would perform very well on issues 1 through 4. Other methods, less so. I only omit issue 5 because CROs (contract research organisations) are currently ill-equipped to run these studies—there’s no fundamental reason for this, and if demand increased, the obstacle would shrink. In the meantime, it unfortunately does raise the burden on the investigating team.
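To make Drop-the-Loser concrete, here is a minimal simulation sketch of its urn scheme (the rule due to Ivanova, not code from the post; the function name and parameters are my own). The urn holds one immigration ball plus one ball per arm; drawing an arm ball allocates a patient to that arm, and the ball is returned only if the patient responds, so losing arms gradually lose representation:

```python
import random

def drop_the_loser(success_probs, n_patients, seed=0):
    """Sketch of the Drop-the-Loser urn rule.

    Draw a ball from the urn:
      - immigration ball: return it and add one ball of each arm type;
      - arm ball: allocate the next patient to that arm; return the
        ball on a success, drop it ('drop the loser') on a failure.
    """
    rng = random.Random(seed)
    k = len(success_probs)
    urn = ["imm"] + list(range(k))   # immigration ball + one ball per arm
    allocations = [0] * k
    successes = [0] * k
    while sum(allocations) < n_patients:
        ball = rng.choice(urn)
        if ball == "imm":
            urn.extend(range(k))      # replenish: one ball of each arm
        else:
            allocations[ball] += 1
            if rng.random() < success_probs[ball]:
                successes[ball] += 1  # success: ball goes back in the urn
            else:
                urn.remove(ball)      # failure: ball is removed
    return allocations, successes
```

With arms at 0.3 vs 0.7 success probability, the allocation drifts heavily toward the better arm while still keeping some patients on the worse one—which is why it does well on issues 1 and 2 above.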
This is not an objection I’ve heard before. I presume its effect would be equivalent to that of a time trend. Hence some designs would perform well (Drop-the-Loser, doubly adaptive biased coin designs, etc.) and others wouldn’t (Thompson sampling, the forward-looking Gittins index, etc.).
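The time-trend concern can be sketched with a toy simulation (my own illustration, not from the post; the function name and numbers are invented). Two arms are truly identical, but the success probability rises over the trial; a design that shifts allocation toward one arm late in the trial then makes that arm look better under a naive comparison of success rates:

```python
import random

def time_trend_demo(n=4000, seed=1):
    """Two identical arms under a rising success probability.

    First half of the trial: 50/50 allocation. Second half: 90% of
    patients go to arm B (mimicking a response-adaptive design that
    has 'decided' B is better). Because B's patients concentrate in
    the high-success late period, its naive success rate is inflated
    even though the arms are identical.
    """
    rng = random.Random(seed)
    counts = {"A": 0, "B": 0}
    wins = {"A": 0, "B": 0}
    for t in range(n):
        p = 0.3 + 0.4 * t / (n - 1)   # time trend, same for both arms
        if t < n // 2:
            arm = "A" if rng.random() < 0.5 else "B"
        else:
            arm = "B" if rng.random() < 0.9 else "A"
        counts[arm] += 1
        wins[arm] += rng.random() < p  # outcome depends only on time
    return {a: wins[a] / counts[a] for a in counts}
```

Designs that keep allocation proportions stable (or adjust for time in the analysis) avoid this confound; ones that chase the apparent leader do not.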
This is often true, although generalised methods built to address it do exist. See here for an example.
In summary: while I think these difficulties can often be overcome, they should not be ignored. Teams should go in with eyes open, aware that they may have to do more themselves than is typical. Read, discuss, make a plan, implement it. Know each option’s drawbacks. Also know its advantages.
Hope that makes sense.