Reality is often underpowered

Introduction

When I worked as a doctor, we had a lecture by a paediatric haematologist on a condition called acute lymphoblastic leukaemia. I remember being impressed that a very large proportion of patients were being offered trials randomising them between different treatment regimens, currently in clinical equipoise, to establish which had the edge. At the time, one area of interest was whether, given the disease tended to have a good prognosis, treatment intensity could be reduced to lessen the long-term side-effects of treatment whilst not adversely affecting survival.

On a later rotation I worked in adult medicine, and one of the patients admitted to my team had an extremely rare cancer,[1] with a (recognised) incidence of a handful of cases worldwide per year. It happened that the world authority on this condition worked as a professor of medicine in London, and she came down to see them. She explained to me that treatment for this disease was almost entirely based on first principles, informed by a smattering of case reports. The disease unfortunately had a bleak prognosis, although she was uncertain whether this was because it was an aggressive cancer to which current medical science had no answer, or whether there was an effective treatment out there if only it could be found.

I aver that many problems EA concerns itself with are closer to the second story than the first: in many cases, sufficient data is not only absent in practice but impossible to obtain in principle. Reality is often underpowered for us to wring from it the answers we desire.

Big units of analysis, small samples

The main driver of this problem for ‘EA topics’ is that the outcomes of interest have units of analysis for which the whole population (let alone any sample from it) is small-n: e.g. outcomes at the level of a whole company, a whole state, or a whole population. For these big-unit-of-analysis/small-sample problems, RCTs face formidable in-principle challenges:

  1. Even if by magic you could get (e.g.) all countries on earth to agree to randomly allocate themselves to policy X or Y, this is merely a sample size of ~200. If you’re looking at companies relevant to cage-free campaigns, or administrative regions within a given state, this can easily fall another order of magnitude.

  2. These units of analysis tend to be highly heterogeneous, almost certainly in ways that affect the outcome of interest. Although the key ‘selling point’ of the RCT is that it implicitly controls for all confounders (even ones you don’t know about), this statistical control is a (convex) function of sample size, and isn’t hugely impressive at ~100 per arm: it is well within the realms of possibility for the randomisation to happen to give arms with an unbalanced allocation of any given confounding factor (see the simulation sketch after this list).

  3. ‘Roughly’ (in expectation) balanced intervention arms are unlikely to be good enough in cases where the intervention is expected to have much less effect on the outcome than other factors (e.g. wealth, education, size, whatever), so an effect size that favours one arm or the other can alternatively be attributed to one of these.

  4. Supplementing this raw randomisation by explicitly controlling for confounders you suspect (cf. block randomisation, propensity matching, etc.) has limited value when you don’t know all the factors which plausibly ‘swamp’ the likely intervention effect (i.e. you don’t have a good predictive model for the outcome but-for the intervention tested). In any case, these techniques tend to trade off against the already scarce resource of sample size.
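
To make the second point concrete, here is a minimal simulation sketch (in Python; the numbers are illustrative assumptions of mine, not drawn from any real study) of how often bare randomisation of ~200 units leaves a single binary confounder noticeably unbalanced between two arms of ~100 each:

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 200        # e.g. every country on earth agreeing to be randomised
n_sims = 10_000      # number of simulated 'trials'
p_confounder = 0.5   # half the units carry some binary confounding factor

imbalances = []
for _ in range(n_sims):
    confounder = rng.random(n_units) < p_confounder
    # Randomly split the units into two equal arms
    arm = rng.permutation(n_units) < n_units // 2
    # Gap in confounder prevalence between the two arms
    gap = abs(confounder[arm].mean() - confounder[~arm].mean())
    imbalances.append(gap)

imbalances = np.array(imbalances)
print(f"Mean imbalance: {imbalances.mean():.1%}")
print(f"Chance of a >10 percentage-point imbalance: {(imbalances > 0.10).mean():.1%}")
```

Run as written, a double-digit percentage of the simulated ‘trials’ end up with the confounder more than ten percentage points more prevalent in one arm than the other, which is ample room for a modest intervention effect to be swamped or mimicked.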

These ‘small sample’ problems aren’t peculiar to RCTs, but are endemic to all other empirical approaches. The wealth of econometric and quasi-experimental methods (e.g. IVs, regression discontinuity analysis) still runs up against hard data limits, as well as those owed to the ways in which they fall short of the ‘ideal’ RCT set-up (e.g. imperfect instrumentation, omitted variable bias, nagging concerns about reverse causation). Qualitative work (case studies, etc.) has the same problems, even if other ones (e.g. selection) loom larger.

Value of information and the margin of common-sense

None of this means such work has zero value: big enough effect sizes can still be reliably detected, and even underpowered studies still give us information. But we may learn very little on the margin of common sense. Suppose we are interested in ‘what makes social movements succeed or fail?’ and we retrospectively assess a (somehow) representative sample of social movements. It seems plausible that the big (and plausibly generalisable) hits from this investigation would prove commonsensical (e.g. “Social movements are more likely to grow if members talk to other people about the social movement”), whilst the ‘new lessons’ remain equivocal and uncertain.

We should expect to see this if we believe the distribution of relevant effect sizes is heavy-tailed, with most of the variance in (say) social movement success owed to a small number of factors, and the rest made up of a large multitude of smaller effects. In such a case, modest increases in information (e.g. from small-sample data) may bring even more modest increases in either explaining the outcome or identifying what contributes to it:

[Figure: toy example described in the caption below]

Toy example, where we propose a roughly Pareto distribution of effect sizes among contributory factors. The largest factors (which nonetheless explain a minority of the variance) may prove obvious to the naked eye (blue). Adding in the accessible data may only slightly lower the detection threshold, with modest impacts on identifying further factors (green) and on overall accuracy. The great bulk of the variance remains owed to a large ensemble of small factors which cannot be identified (red). Note that the detection threshold falls with sample size only with diminishing returns.
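
For those who want to play with this toy example, here is a small sketch of it in Python. All the numbers (the Pareto shape, the number of factors, the two detection thresholds) are invented for illustration, and a factor’s ‘share of variance’ is simply assumed to scale with its squared effect size:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented ensemble of contributory factors with roughly Pareto-distributed
# effect sizes (the shape parameter is an arbitrary choice for the toy example)
n_factors = 1_000
effect_sizes = rng.pareto(a=6.0, size=n_factors) + 1

# Toy assumption: each factor's share of the variance scales with its
# squared effect size
variance_share = effect_sizes**2 / np.sum(effect_sizes**2)

def detected(threshold):
    """Count the factors above a detection threshold and the share of
    total variance they jointly explain."""
    mask = effect_sizes > threshold
    return mask.sum(), variance_share[mask].sum()

# Two arbitrary thresholds: 'obvious to the naked eye' vs. the somewhat
# lower threshold bought by small-sample data
for label, threshold in [("naked eye", 2.0), ("with accessible data", 1.5)]:
    n, share = detected(threshold)
    print(f"{label:>20}: {n:4d} factors detected, "
          f"explaining {share:.0%} of the variance")
```

The intended pattern mirrors the figure: the few factors big enough to clear the higher threshold explain only a small minority of the variance, and the lower threshold bought by extra data picks up more factors while still leaving most of the variance in the undetected tail.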

The scientific revolution for doing good?

The foregoing should not be read as general scepticism about using data. The triumphs of evidence-based medicine, although not unalloyed, have been substantial, and considerable gains remain on the table (e.g. leveraging routine clinical practice). The ‘randomista’ trend in international development is generally one to celebrate, especially as (I understand) it increasingly aims to isolate factors that have credible external validity. The people who run cluster-randomised, stepped-wedge, and other study designs with big units of analysis are not ignorant of their limitations, and can deploy these designs judiciously.

But it should temper our enthusiasm about how many insights we can glean by getting some data and doing something sciency to it.[2] The early successes of EA in global health owe a lot to this being one of the easier areas in which to get crisp, intersubjective, and legible answers from a wealth of available data. For many to most other issues, data-driven demonstration of ‘what really works’ will never be possible.

We see that people do better than chance (or better than others) in terms of prediction and strategic judgement. Yet, at least judging by the superforecasters (this writeup by AI Impacts is an excellent overview), how they do so is much more indirectly data-driven: one may have to weigh several facially-relevant ‘base rates’ against one another, adjusting these rates by factors whose coefficients may be estimated from their role in loosely analogous cases, and so forth.[3] Although this process may be informed by statistical and numerical literacy (e.g. decomposition, ‘fermi-ization’), it seems to me the main action going on ‘under the hood’ is developing a large (and implicit, and mostly illegible) set of gestalts and impressions to determine how to ‘weigh’ relevant data that is nonetheless fairly remote from the question at issue.[4]
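
As a stylised illustration of this sort of ‘fermi-ized’ aggregation (every number below is invented for the example), one might pool several loosely relevant base rates, weighting each by a rough judgement of how analogous its reference class is, before a final subjective adjustment:

```python
# Hypothetical base rates from loosely analogous reference classes, each
# paired with a subjective weight for how relevant that class seems
base_rates = {
    "similar events in the last decade": (0.20, 3.0),
    "a broader historical reference class": (0.10, 1.5),
    "expert survey on an adjacent question": (0.35, 1.0),
}

# Simple linear pool: weighted average of the candidate base rates
total_weight = sum(w for _, w in base_rates.values())
pooled = sum(p * w for p, w in base_rates.values()) / total_weight

# Final gestalt adjustment for case-specific considerations, applied on
# the odds scale so the result remains a valid probability
adjustment = 1.3   # >1 nudges the estimate upwards
odds = pooled / (1 - pooled) * adjustment
estimate = odds / (1 + odds)

print(f"Pooled base rate: {pooled:.2f}")
print(f"After adjustment: {estimate:.2f}")
```

The point of the sketch is not the particular pooling rule but that most of the work lies in the (illegible) judgements feeding the weights and the final adjustment.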

Three final EA takeaways:

  1. Most who (e.g.) write up a case study or a small-sample analysis tend to be well aware of the limitations of their work. Nonetheless I think it is worth paying more attention to how these limitations bear on the overall value of information before one embarks on such pieces of work. Small nuggets of information may not be worth the time to excavate even when the audience are ideal reasoners. As they aren’t, one risks them (or oneself) over-weighting their value when considering problems which demand tricky aggregation of a multitude of data sources.

  2. There can be good reasons why expert communities in some areas haven’t tried to use data explicitly to answer problems in their field. In these cases, the EA-style ‘calling card’ of doing this anyway can be less a disruptive breakthrough and more a mark of intellectual naivete.

  3. In areas where ‘being driven by the data’ isn’t a huge advantage, it can be hard to identify an ‘edge’ that the EA community has. There are other candidates: investigating topics neglected by existing work, better-aligned incentives, etc. We should be sceptical of stories which boil down to a generalised ‘EA exceptionalism’.


  1. ↩︎

    Its name escapes me, although arguably including it would risk deductive disclosure. To play it safe, I’ve obfuscated some details.

  2. ↩︎

    And statistics and study design generally prove hard enough that experts often go wrong. Given the EA community’s general lack of cultural competence in these areas, I think its (generally amateur) efforts at the same have tended to fare worse.

  3. ↩︎

    I take as supportive evidence that a common feature among superforecasters is that they read a lot, not just in areas closely relevant to their forecasts, but more broadly across history, politics, etc.

  4. ↩︎

    Something analogous happens in other areas of ‘expert judgement’, whereby experts may not be able to explain why they made a given determination. We know that this implicit expert judgement can be outperformed by simple ‘reasoned rules’. My suspicion, however, is that it still performs better than chance (or inexpert judgement) when such rules are not available.