1.3) (Credit to Scott Alexander’s recent post.) The psychedelic literature mainly comprises small studies, generally conducted by ‘true believers’ in psychedelics and often (but not always) on self-selected, motivated participants. This seems well within the territory of scientific work vulnerable to the replication crisis.
I think small studies are also more vulnerable to publication bias.
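A quick simulation makes the mechanism concrete. This is a minimal sketch, not from the thread: it assumes a two-arm design with a true standardized effect of 0.2 and a crude “only significant-in-the-right-direction results get published” filter, with the sample sizes chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def mean_published_effect(n_per_arm, true_d=0.2, n_studies=10_000):
    """Simulate two-arm studies and 'publish' only those that come out
    significant in the right direction; return the mean Cohen's d
    among the published ones."""
    published = []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(true_d, 1.0, n_per_arm)
        t, p = stats.ttest_ind(treated, control)
        if p < 0.05 and t > 0:  # crude model of publication bias
            pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
            published.append((treated.mean() - control.mean()) / pooled_sd)
    return np.mean(published)

# Published small studies overshoot the true effect (0.2) far more than
# large ones, because only their luckiest draws clear the significance bar.
print("n = 20 per arm:", mean_published_effect(20))
print("n = 200 per arm:", mean_published_effect(200))
```

The point is that a small study has to observe an unusually large effect to reach significance at all, so a literature of small published studies systematically overstates the true effect even before anyone does anything questionable.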
On the flip side, the “true believers” may actually be on to something, but have a hard time formalizing their procedure into something that can be replicated at scale. If larger studies fail to replicate the results from the small studies, this may be why.
Do you have any examples of this actually happening? I have seen it offered many times as an excuse for things that never pan out, but I don’t recall an instance of it actually delivering. E.g. in Many Labs 2 and other mass reproducibility efforts, you don’t find a minority of experimenters with a ‘knack’ who get the effect but can’t pass it on to others.
I don’t have data either way, but “knacks” for psychotherapy feel more plausible to me than “knacks” for producing the effects in Many Labs 2 (just skimming over the list of effects here). Like, the strongest version of the opposing claim is that no one is more skilled than anyone else at anything, which seems obviously false.
Suppose we conduct a study of the Feynman problem-solving algorithm: “1. Write down the problem. 2. Think real hard. 3. Write down the solution.” An n=1 study of Richard Feynman finds the algorithm works great, but it fails to replicate on a larger sample. What is your conclusion: that the n=1 result was spurious, or that Feynman has useful things to teach us but the three-step algorithm didn’t capture them?
I haven’t read enough studies on psychedelics to know how much room the typical procedure leaves for a skilled therapist to make a difference, though.