Let’s say EAs contribute $50m†… There are various estimates of the cost of introducing a drug; a 2014 estimate puts it at $2.4bn. I’d guess EAs could only cover the early stages, with much of the rest picked up by drug companies or other funders.
An EA contribution of far less than $50m would be leveraged.
The $2.4bn estimate doesn’t apply well to psychedelics, because there is no cost of drug discovery (the drugs in question have already been discovered).
As a data point, MAPS has shepherded MDMA through the three phases of the FDA approval process with a total spend of ~$30m.
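For scale, here’s the ratio between those two figures (rough numbers from the comments above; note the $2.4bn figure covers full drug development including discovery, so this overstates the gap somewhat):

```python
maps_spend = 30e6          # ~$30m: MAPS's approximate total spend on the MDMA trials
generic_estimate = 2.4e9   # $2.4bn: 2014 estimate of the cost of introducing a new drug

# How many times cheaper the MAPS path has been than the generic estimate
ratio = generic_estimate / maps_spend
print(round(ratio))  # 80
```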
This somehow causes rollout about 3 years earlier than it would otherwise have happened, and doesn’t trade off against the rollout of any other important drug.
The current most important question for legal MDMA & psilocybin rollout in the US is not when, but at what quality. We’re at a point where the FDA is likely (>50% chance) going to reschedule these drugs within the next 5 years (both have received breakthrough therapy designation from the FDA).
Many aspects of how FDA rescheduling goes are currently undetermined (insurance, price, off-label prescription, set & setting in which the drugs are used). A savvy research agenda + advocacy work could tip these factors in a substantially more favorable direction.
Doing research & advocacy here scales fairly linearly (most study designs I’ve seen cost between $50k and $1m; advocates can be funded for a year for $60k–$80k).
Very, very optimistically, 1,000 long-term-focused EAs in the US, 10% of the population suffer from relevant mental health issues, and all of them use the new drug.
From the OP:
The 2019 Slate Star Codex reader survey offers some data here: 17.4% of survey respondents have a formal diagnosis of depression (another 16.7% suspect they are depressed but haven’t been diagnosed); 12.6% of respondents have a formal diagnosis of anxiety (another 18.7% suspect they have anxiety but haven’t been diagnosed).
I think SSC readers are an appropriate comparison class for long-term-focused EAs.
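To make the “very, very optimistically” estimate above concrete, a toy calculation (hypothetical headcount; using only the depression figures, and ignoring overlap between diagnosed and suspected cases):

```python
long_termist_eas = 1_000       # optimistic US headcount assumed above

# 2019 SSC reader survey figures quoted above
depression_diagnosed = 0.174
depression_suspected = 0.167

# Affected EAs under a flat 10% prevalence vs. the SSC depression rates
flat_estimate = round(long_termist_eas * 0.10)
ssc_estimate = round(long_termist_eas * (depression_diagnosed + depression_suspected))
print(flat_estimate, ssc_estimate)  # 100 341
```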
That said, I agree with the thrust of this part of your argument. There just aren’t very many people working on long-termist stuff at present. Once all of these people are supported by a comfortable salary, it’s not clear that further spend on their research of any kind is leveraged (i.e. not clear that there’s a mechanism for converting more money into more research product from the present set of researchers, once you pay them a comfortable salary).
So perhaps the argument collapses to:
effect from increasing the amount of long-termist labor + effect from short-termist benefits
And because of your priors, you discount “effect from short-termist benefits” to 0.
I still propose that:
effect from increasing the amount of long-termist labor
is probably worth it.
Doesn’t feel like a stretch, given that this mechanism underpins the case for most of the public-facing work EA does (e.g. 80,000 Hours, CFAR, Paradigm Academy, Will MacAskill’s book).
Probably the crux here is that I think rationality training & the psychedelic experience can achieve similar kinds of behavior change (e.g. less energy spent on negative self-talk & unhelpful personal narratives) such that their effect sizes can be compared.
Whereas you think that rationality training & the psychedelic experience are different enough that believable comparison isn’t possible.
Does that sound right to you?
Rationality projects: I don’t care to arbitrate what counts as EA.
Isn’t much of the present discussion about “what counts as EA?”
Maybe I’m getting hung up on semantics. The question I most care about here is: “what topics should EAs dedicate research capacity & capital to?”
Does that seem like a worthwhile question?
It would be helpful if you could agree with or contest that claim before we move on to the other upside.
Right. I’m saying that the math we should care about is:
effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor + effect from short-termist benefits
I think that math is likely to work out.
Given your priors, we’ve been discounting “effect from short-termist benefits” to 0.
So the math is then:
effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor
And I think that is also likely to work out, though the case is somewhat weaker when we discount short-termist benefits to 0.
(I also disagree with discounting short-termist benefits to 0, but that doesn’t feel like the crux of our present disagreement.)
If each of them donated an average of US$1 for this cause, they would match all of GD’s transfers in 2017.
fwiw I think it’s very hard to get people to donate to things.
From section 4(b) of the OP: “Roughly $40 million has been committed to psychedelic research since 2000.”
Got it, thanks.
So, I estimate GD results in US$200/QALY…
Enthea’s estimate for psychedelic liberalization is $472/DALY.
As far as I know, GiveWell considers cost-effectiveness estimates informative only when interventions differ in efficacy by orders of magnitude.
For two interventions that are on the same order of magnitude, the analyses aren’t granular enough to believably inform which is more effective.
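As a quick check on that claim using the two figures quoted above (treating QALYs and DALYs as interchangeable for this purpose, which is itself a simplification):

```python
import math

givedirectly = 200   # US$/QALY: GiveDirectly estimate from the comment above
enthea = 472         # US$/DALY: Enthea's estimate for psychedelic liberalization

# The two estimates differ by well under one order of magnitude
ratio = enthea / givedirectly
gap_in_orders_of_magnitude = math.log10(ratio)
print(round(ratio, 2), round(gap_in_orders_of_magnitude, 2))  # 2.36 0.37
```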
We’ve jumped from emotional blocks & unhelpful personal narratives to life satisfaction & treatment-resistant depression, which are very different.
fwiw I think negative self-talk (a kind of emotional block) & unhelpful personal narratives are big parts of the subjective experience of depression.
As you note, the two effects you’re now comparing (life satisfaction & treatment-resistant depression) aren’t really the same at all.
Comparing dissimilar effects is a core part of EA-style analysis, right?
The latter seems very likely false. You would need to account for the additional cost of researching, advocating for, and implementing a specific new treatment.
Does this mean you think that projects like CFAR & Paradigm Academy shouldn’t be associated with the EA plank?
… specifically long-term-focused EAs in that geography (<0.001% of the population). The math for that seems really unlikely to work out.
Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved. (See section 3(a) of the OP.)
The marginal value of each additional value-aligned + capable long-termist is probably quite high.
Got it, thanks!
Curious whether “No clinical evidence on NLP for the treatment of adults with PTSD, GAD, or depression was identified” is an update for you re: NLP’s efficacy.
However, psychedelics don’t seem likely to be a particularly effective long-term intervention at the moment.
Curious for your thoughts on the long-termist argument I made in the OP?
Trying to legalize psychedelics or improve research for the long term impacts seems quite implausible as an effective intervention.
I’m not really sure what you mean by “improve research for the long term impacts.”
Could you say a bit more about why liberalizing psychedelic access and conducting more academic research on psychedelics seem implausible as effective interventions?
Sure, framing this as “psychedelic interventions in the cause areas of mental health & longterm future” seems okay.
(I’m advocating for the EA community to pay more attention to psychedelic interventions, and I’m agnostic about how to frame that.)
NLP-based approach to treating PTSD, which reportedly has a higher success rate than MAPS has reported. The basic idea behind it has been around for years, without spreading very widely, and without much interest from mainstream science.
From the report you linked to, in the Key Findings section: “No clinical evidence on NLP for the treatment of adults with PTSD, GAD, or depression was identified.”
Could you point me to a citation for NLP having a higher success rate than MDMA for treating PTSD?
Got it. (And thanks for factoring in kindness!)
However, even assuming that the unknown quantities are probably positive, this doesn’t tell me whether to prioritise it any more than my priors suggest, or whether it beats rationality training.
There hasn’t been very much research on psychedelics for “well” people yet, largely because under our current academic research regime, it’s hard to organize academic RCTs for drug effects that don’t address pathologies.
The below isn’t quite apples-to-apples, but perhaps it’s helpful as a jumping-off point.
CFAR’s 2015 longitudinal study found:
Life satisfaction increased by d = 0.17 (t(131) = 2.08, p < .05). [effect attributed to attending a CFAR workshop]
Carhart-Harris et al. 2018, a study of psilocybin therapy for treatment-resistant depression, found:
Relative to baseline, marked reductions in depressive symptoms were observed for the first 5 weeks post-treatment (Cohen’s d = 2.2 at week 1 and 2.3 at week 5, both p < 0.001)… Results remained positive at 3 and 6 months (Cohen’s d = 1.5 and 1.4, respectively, both p < 0.001).
Not apples-to-apples, because a population of people with treatment-resistant depression is clearly different than a population of CFAR workshop participants. But both address a question something like “how happy are you with your life?”
Even if you add a steep discount to the Carhart-Harris 2018 effect, the effect size would still be comparable to the CFAR effect size – let’s assume that 90% of the treatment effect is an artifact of the study due to selection effects, small study size, and factors specific to having treatment-resistant depression.
Assuming a 90% discount, psilocybin would still have an adjusted Cohen’s d = 0.14 (6 months after treatment), roughly in the ballpark of the CFAR workshop effect (d = 0.17).
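The discount arithmetic, spelled out (effect sizes from the two studies discussed above; the 90% discount is an assumption for illustration, not an empirical figure):

```python
cfar_d = 0.17            # CFAR 2015 longitudinal study: life satisfaction
psilocybin_d_6mo = 1.4   # Carhart-Harris et al. 2018: depression, 6 months post-treatment

discount = 0.90          # assume 90% of the psilocybin effect is artifactual
adjusted_d = round(psilocybin_d_6mo * (1 - discount), 2)
print(adjusted_d, cfar_d)  # 0.14 0.17
```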
+1 to the importance of psychological safety.
My (weakly held) view is that the median EA org is weak on psychological safety & also underweights its importance (when reflecting on what to prioritize, org-development-wise).
So, despite updating my priors, I still don’t think that donating to this cause would result, at the margin, in more QALYs than donating to GD.
What’s your ballpark dollars-per-QALY estimate for GiveDirectly donations, and your ballpark dollars-per-QALY estimate for the psychedelic intervention you have in mind?
This analysis could be helpful as a jumping-off point for the latter.
Also note that the QALY framework likely underweights mental health interventions.
It looks like they haven’t published a recording of the April 2019 meeting: https://www.givewell.org/about/official-records#Boardmeetings
(archived version, archived 2019-05-22)
Last month, I asked about this on GiveWell’s most recent open thread.
They haven’t replied yet; I just followed up again on the same thread.
Sidenotes are great!
Inspiring example: https://www.gwern.net/Spaced-repetition
Right, so you would want to show that 30-40% of interventions with similar literatures pan out.
I think we have a disagreement about what the appropriate reference class here is.
The reference class I’m using is something like “results which are supported by 2-3 small-n studies with large effect sizes.”
I’d expect roughly 30-40% of such results to hold up after confirmatory research.
Somewhat related: 62% of results assessed by Camerer et al. 2018 replicated.
It’s a bit complicated to think about replication re: psychedelics because the intervention is showing promise as a treatment for multiple indications (there are a couple studies showing large effect sizes for depression, a couple studies showing large effect sizes for anxiety, a couple studies showing large effect sizes for addictive disorders).
Could you say a little more about what reference class you’re using here?