1) Generally my probability mass is skewed to the lower ends of the intervals I’m noting. Thus the 70% band mostly covers ‘useful, but with multiple caveats’ rather than just one (e.g. a bit like ketamine as Scott describes it: only really useful for depression, and even then with generally modest effects even as a second-line therapy). Likewise the 3% is mostly ‘around SSRIs, or maybe slightly better’, with subpercentile mass on the dramatic breakthrough I think you have in mind.
2) Re. updates: There wasn’t a huge update on reading the studies (not that I claim to have examined them closely), because I was at least dimly aware since medical school of psychedelics having some promise in mental health.
Although this was before I appreciated the importance of being quantitative, I imagine I would have given higher estimates back then, with the difference mainly accounted for by my appreciation of how treacherous replication has proven in both medicine and psychology.
Seeing that at least some of the studies were conducted reasonably given their limitations has attenuated this hit somewhat, but I had largely priced this in already (i.e. I wasn’t expecting to find that the body of psychedelic work was obviously junk science, etc.).
3) Aside: GiveWell’s view doesn’t appear to be “1-2% that deworming effects are real”, but rather:
The “1-2% chance” doesn’t mean that we think that there’s a 98-99% chance that deworming programs have no effect at all, but that we think it’s appropriate to use a 1-2% multiplier compared to the impact found in the original trials – this could be thought of as assigning some chance that deworming programs have no impact, and some chance that the impact exists but will be smaller than was measured in those trials.
I.e. their central estimate spreads across a range from ‘no effect’ through ‘modest effect’ to ‘as good as the index study advertised’, but weighted towards the lower end.
One could argue about whether, if applied to psychedelics, the discount factor they suggest should be higher or lower than this (multiple studies would probably push towards a more generous discount factor, but an emphasis on quality might point to a more pessimistic one, as the Kremer index study has, I think, a stronger methodology, and a lot more vetting, than the work noted here). But even something like a discount of ~0.1 would make a lot of the results noted above considerably less exciting (e.g. the Carhart-Harris effect size drops to d ≈ 0.3, which is good, but puts it back into the ranges seen with existing interventions like CBT).
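To make the arithmetic behind this kind of discounting concrete, here is a minimal sketch. The scenario weights and effect sizes below are illustrative assumptions of mine, not figures from GiveWell or from any psychedelics study:

```python
# Sketch of GiveWell-style discounting of a reported effect size.
# All numbers here are illustrative, not from any actual analysis.

def discounted_effect(reported_d: float, multiplier: float) -> float:
    """Scale a reported standardized effect size (Cohen's d) by a discount multiplier."""
    return reported_d * multiplier

# A multiplier can itself be read as an expectation over scenarios:
# (probability, fraction of the reported effect that is real).
scenarios = [
    (0.5, 0.0),   # no effect at all
    (0.4, 0.15),  # modest effect, much smaller than reported
    (0.1, 1.0),   # as good as the index study advertised
]
multiplier = sum(p * f for p, f in scenarios)

# A large reported d from a small early trial, cut down by a ~0.1 discount,
# lands back among effect sizes typical of existing second-line therapies.
print(round(discounted_effect(3.0, 0.1), 2))
print(round(multiplier, 2))
```

The point of the sketch is only that even an optimistic-looking headline effect size becomes unremarkable once a plausible replication discount is applied.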
Value of information (VoI) is distinct from this best guess (analogously, a further deworming RCT to reduce uncertainty may be worth more or less than simply ‘exploiting’ given current uncertainty), but I’d return to my earlier remarks to suggest the likelihood of ending up with something ‘(roughly) as good as the initial results advertise’ is low enough not to make this a good EA buy.
4) Further aside: given the OP was about psychedelics generally (including advocacy and research) rather than the particular question of whether confirmatory research was a good idea, I’d take other (counter-)arguments addressed to the more general topic to be in scope.