Some very interesting thoughts here. I think your final points are excellent, particularly #2. It does seem that experts in some fields have a hard-won humility about the ability of data to answer the central questions in their fields, and that perhaps we should use this as a sort of prior guideline for distributing future research resources.
I just want to note that I think the focus on sample size here is somewhat misplaced. N = 200 is by no means a crazily small sample size for an RCT, particularly when units are villages, administrative units, etc. As you note, suitably large effect sizes are reliably statistically distinguishable from zero in this context. This is true even with considerably smaller samples—even N = 20! Even randomizations of small samples are relatively unlikely to be unbalanced on confounders, and the p-values yielded by now-common methods like randomization inference express exactly this likelihood. To me—and I mean this exclusively in the context of rigorously designed and executed RCTs—this concern can be addressed by greater attention to the actual size of the resulting p-values: our threshold for accepting the non-null finding of a high-variance, small-sample RCT should perhaps be set much lower than usual.
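To make the randomization-inference point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (20 hypothetical village-level outcomes, an arbitrary effect size); the point is just that the p-value is computed directly from the set of possible re-randomizations, so it already reflects how likely a given imbalance is under random assignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented outcomes for N = 20 villages, 10 treated and 10 control,
# with a large true treatment effect relative to the across-unit noise.
treated = rng.normal(loc=60.0, scale=5.0, size=10)
control = rng.normal(loc=50.0, scale=5.0, size=10)

outcomes = np.concatenate([treated, control])
assignment = np.array([1] * 10 + [0] * 10)

def mean_difference(y, z):
    """Difference in mean outcomes between treated (z == 1) and control (z == 0)."""
    return y[z == 1].mean() - y[z == 0].mean()

observed = mean_difference(outcomes, assignment)

# Randomization inference: reshuffle the treatment labels many times and ask
# how often a difference at least as large arises under the sharp null.
n_permutations = 10_000
null_draws = np.array([
    mean_difference(outcomes, rng.permutation(assignment))
    for _ in range(n_permutations)
])

p_value = np.mean(np.abs(null_draws) >= abs(observed))
print(f"observed difference: {observed:.2f}, randomization p-value: {p_value:.4f}")
```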
It is true that when there is high variance across units, statistically significant effects are necessarily large; this can obviously lead to some misleading results. Your point is well-taken in this context: if, for example, there are only 20 administrative units in country X, and we are able to randomize an educational intervention across units that could plausibly increase graduation rates by only 1 percentage point, while graduation rates vary across units by something like 5 points, well, we’re unlikely to find anything useful. But it remains statistically possible to do so given a strong enough effect!
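And here is a rough simulation of that graduation-rate scenario, treating the 5-point spread as a standard deviation across units and using an invented 70% baseline rate; under those assumptions a simple two-arm comparison detects the 1-point effect only a small fraction of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented numbers matching the example: 20 administrative units split 10/10,
# graduation rates spread across units with a 5-percentage-point standard
# deviation, and an intervention worth at most 1 percentage point.
n_per_arm = 10
sd_across_units = 5.0   # percentage points
true_effect = 1.0       # percentage points
n_simulations = 10_000

detections = 0
for _ in range(n_simulations):
    control = rng.normal(loc=70.0, scale=sd_across_units, size=n_per_arm)
    treated = rng.normal(loc=70.0 + true_effect, scale=sd_across_units, size=n_per_arm)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:
        detections += 1

print(f"simulated power to detect the 1-point effect: {detections / n_simulations:.1%}")
```

That low power is just the quantitative version of "unlikely to find anything useful"; a much larger true effect would push it up quickly, which is the sense in which detection remains statistically possible.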
I'll stick to my guns on the sample size point, though I suspect you would agree with it had I expressed it better in the OP.
I agree with you that sample sizes of 200 (or 20, or fewer) can be good enough depending on the context. My core claim is that these contexts do not obtain for many EA problems: the units vary a lot, most of that variance is explained by factors other than the one we’re interested in, and the variance explained by the intervention/factor of interest will be much smaller (i.e. high variance across units, small effect sizes).
[My intuition driving the confounders point is that balancing these doesn’t look feasible if they are sufficiently heavy-tailed (e.g. take all countries starting with A-C and randomly assign them to two arms; the arms will tend to have very large differences in, say, mean GDP). The implied premise is that lots of EA problems will be ones where factors like these are expected to have a greater effect than the upper bound on the intervention’s effect. I may be off-base.]
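To make that heavy-tail intuition concrete, here is a quick simulation sketch with purely illustrative numbers: 40 hypothetical "countries" with lognormally distributed GDP, randomly split into two arms of 20.

```python
import numpy as np

rng = np.random.default_rng(2)

# Purely illustrative: GDPs drawn from a heavy-tailed (lognormal) distribution.
n_countries = 40
n_randomizations = 10_000
gdp = rng.lognormal(mean=0.0, sigma=2.0, size=n_countries)

gaps = np.empty(n_randomizations)
for i in range(n_randomizations):
    # Random split of the 40 units into two arms of 20.
    arm = rng.permutation(n_countries) < n_countries // 2
    gaps[i] = abs(gdp[arm].mean() - gdp[~arm].mean()) / gdp.mean()

print(f"median gap between arm means: {np.median(gaps):.0%} of the overall mean")
print(f"share of randomizations with a gap above 50%: {np.mean(gaps > 0.5):.1%}")
```

With a heavy enough tail, a few outlier units dominate the arm means, so even a perfectly fair randomization routinely produces large imbalances on that confounder, which is the worry about it swamping a modest intervention effect.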
Thanks for responding. I’ve now reread your post (twice) and I feel comfortable in saying that I twisted myself up reading it the first time around. I don’t think my comment is directly relevant to the point you’re making, and I’ve retracted it. The point is well-taken, and I think it holds up.