That there are various mechanisms in complex systems (of which I only feel like I understand a few) which produce power-law-type tails. These can enter as multiplicative factors, and the convergence back to log-normal that we'd expect from the central limit theorem (applied to the logs of the factors) is slow in the tails.
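A quick simulation can make this concrete. The sketch below is purely illustrative and the distributions and parameters are my own assumptions: multiply twenty modest log-normal factors (whose product is itself log-normal by the CLT on logs), then mix in a single Pareto factor, and compare the upper quantiles.

```python
# Illustrative sketch (assumed distributions/parameters, not from the discussion):
# a product of log-normal factors vs. the same product with one power-law factor.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_factors = 100_000, 20

# Product of independent log-normal factors: the log is a sum of normals,
# so the product is (approximately) log-normal.
lognormal_product = np.prod(
    rng.lognormal(mean=0.0, sigma=0.3, size=(n_samples, n_factors)), axis=1
)

# Same product with one Pareto (power-law) factor multiplied in.
pareto_factor = 1 + rng.pareto(a=1.5, size=n_samples)
mixed_product = lognormal_product * pareto_factor

# The bulk of the two distributions looks similar, but the power-law factor
# dominates far out in the upper tail.
for q in (0.5, 0.9, 0.99, 0.999):
    print(f"q={q}: lognormal-only={np.quantile(lognormal_product, q):8.1f}, "
          f"with Pareto factor={np.quantile(mixed_product, q):10.1f}")
```

The medians barely differ, but the extreme quantiles of the mixed product pull far away, which is the sense in which a single power-law factor keeps the tail heavy even after many well-behaved factors are multiplied in.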
It seems like this probably depends a lot on what type of intervention you’re studying. I guess I would expect x-risks to have power-law-ish distributions, but I can’t think of very many power-law factors that would influence e.g. scaling up a proven global health intervention.
I agree that the distribution will depend on the kind of intervention. When you take into account indirect effects you may get some power-law type behaviour even in interventions where it looks unlikely, though—for instance coalescing broader societal support around an intervention so that it gets implemented far more than your direct funding provides for.
Our distribution of beliefs about the cost-effectiveness of scaling up something which is “proven” is likely to have particularly thin tails compared with “unproven” things, since by proof we tend to mean high-quality evidence that substantially tightens the possibilities. I’m not sure whether that changes the eventual tail to a qualitatively different kind of behaviour, or just gives quantitatively narrower distributions, though.
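One toy model of the “quantitative vs qualitative” question (entirely my own assumptions about the model and numbers): a conjugate normal update on the log of cost-effectiveness. In this toy case the posterior stays in the same log-normal family, so strong evidence thins the tail only quantitatively; a heavier-tailed prior or likelihood could behave differently.

```python
# Hedged toy model: conjugate normal-normal update on log(cost-effectiveness).
# Assumed illustrative numbers; a precise measurement shrinks the variance a lot,
# but the posterior is still log-normal, i.e. the same kind of tail.
import numpy as np
from scipy import stats

prior_mu, prior_sigma = 0.0, 1.5   # broad prior on log cost-effectiveness
obs_mu, obs_sigma = 0.5, 0.2       # a precise "proven" measurement (assumed)

# Standard conjugate update on the log scale: precisions add.
post_var = 1.0 / (1.0 / prior_sigma**2 + 1.0 / obs_sigma**2)
post_mu = post_var * (prior_mu / prior_sigma**2 + obs_mu / obs_sigma**2)
post_sigma = np.sqrt(post_var)

prior = stats.lognorm(s=prior_sigma, scale=np.exp(prior_mu))
posterior = stats.lognorm(s=post_sigma, scale=np.exp(post_mu))

# Tail mass above 10x the posterior median: much smaller after updating,
# though both distributions are still log-normal in kind.
threshold = 10 * posterior.median()
for name, dist in [("prior", prior), ("posterior", posterior)]:
    print(f"{name}: P(X > {threshold:.1f}) = {dist.sf(threshold):.2e}")
```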