Boring answer warning!
The best argument against most things being ‘an EA cause area’† is simply that there is insufficient evidence in favour of the thing being a top priority.
I think future generations probably matter morally, so the information in sections 3(a), 3(b) and 4 matters most to me. I don’t see the information in 3(a) or 3(b) telling me much about how leveraged any particular intervention is. There is info about what a causal mechanism might be, but analysis of its strength is also needed. (For example, you say that psychedelic interventions are plausibly in the same ballpark of effectiveness as other interventions that increase the set of well-intentioned + capable people. I only agree with this because you use the word ‘plausibly’, and ‘plausibly... in the same ballpark’ isn’t enough to make something an EA cause area.) I think similarly about previous discussion I’ve seen about the sign and magnitude of psychedelic interventions’ effects on the long-term future. (I’m also pretty sceptical of some of the narrower claims about psychedelics causing self-improvement.††)
I did appreciate your coverage in section 4 of the currently small amount of funding and what is getting done as a result, which seems like it could form part of a more thorough analysis.†††
My amateur impression is that Michael Plant has made a decent start on quantifying near-term effects, though I don’t think anyone should take my opinion on that very seriously. Even if that start looks good, I would be unsurprised if most people who put less weight on future generations than I do still wanted a more thorough analysis before directing their careers towards the cause.
As I said, it’s a boring answer, but it’s still my true objection to prioritising this area. I also think negative PR is a material consideration, but I figured someone else would cover that.
-----
† Here I’m assuming that ‘psychedelics being an EA cause area’ would eventually involve effort on a similar scale to the areas you’re directly comparing it to, such as global health (say ~100 EAs contributing to it, ~$10m in annual donations by EA-aligned people). If you weaken ‘EA cause area’ to mean ‘someone should explore this’, then my argument doesn’t work, but the question would then be much less interesting.
†† I think mostly this comes from me being pretty sceptical of claims of self-improvement which don’t have fairly solid scientific backing. (e.g. I do deep breathing because I believe that the evidence base is good, but I think most self-improvement stuff is random noise.) I think that the most important drivers of my intuitions for how to handle weakly-evidenced claims have been my general mathematical background, a few week-equivalents trying to understand GiveWell’s work, this article on the optimiser’s curse, and an attempt to simulate the curse to get a sense of its power. Weirdness aversion and social stuff may be incorrectly biasing me, but e.g. I bought into a lot of the weirdest arguments around transformative AI before my friends at the time did, so I’m not too worried about that.
††† I also appreciated the prize incentive, without which I might not have written this comment.
There is info about what a causal mechanism might be, but analysis of its strength is also needed.
Curious for your take on this part of the OP:
So, to the extent that the EA community is limited by information & technique transfer, I’d expect conceptual rationality training to be more leveraged.
To the extent that the EA community is limited by emotional blocks & unhelpful personal narratives, I’d expect the psychedelic experience to be more leveraged.
My current view is that the EA community is more limited by emotional blocks & unhelpful personal narratives. The 2019 Slate Star Codex reader survey offers some data here: 17.4% of survey respondents have a formal diagnosis of depression (another 16.7% suspect they are depressed but haven’t been diagnosed); 12.6% of respondents have a formal diagnosis of anxiety (another 18.7% suspect they have anxiety but haven’t been diagnosed).
I believe you when you say that psychedelic experiences have an effect of some (unknown) size on emotional blocks & unhelpful personal narratives, and that this would change workers’ effectiveness by some (unknown) amount. However, even assuming that the unknown quantities are probably positive, this doesn’t tell me whether to prioritise it any more than my priors suggest, or whether it beats rationality training.
Nonetheless, I think your arguments should be either compelling or something of a wake-up call for some readers. For example, if a reader does not require careful, quantified arguments to justify their favoured cause area†, they should also not require careful, quantified arguments about other things (including psychedelics).
† For example, but by no means exclusively, rationality training.
[Edited for kindness while keeping the meaning the same.]
Got it. (And thanks for factoring in kindness!)
However, even assuming that the unknown quantities are probably positive, this doesn’t tell me whether to prioritise it any more than my priors suggest, or whether it beats rationality training.
There hasn’t been very much research on psychedelics for “well” people yet, largely because under our current academic research regime, it’s hard to organize academic RCTs for drug effects that don’t address pathologies.
The below isn’t quite apples-to-apples, but perhaps it’s helpful as a jumping-off point.
CFAR’s 2015 longitudinal study found an effect on life satisfaction of roughly Cohen’s d = 0.17. Carhart-Harris et al. 2018, a study of psilocybin therapy for treatment-resistant depression, found:
Relative to baseline, marked reductions in depressive symptoms were observed for the first 5 weeks post-treatment (Cohen’s d = 2.2 at week 1 and 2.3 at week 5, both p < 0.001)… Results remained positive at 3 and 6 months (Cohen’s d = 1.5 and 1.4, respectively, both p < 0.001).
Not apples-to-apples, because a population of people with treatment-resistant depression is clearly different than a population of CFAR workshop participants. But both address a question something like “how happy are you with your life?”
Even if you add a steep discount to the Carhart-Harris 2018 effect, the effect size would still be comparable to the CFAR effect size – let’s assume that 90% of the treatment effect is an artifact of the study due to selection effects, small study size, and factors specific to having treatment-resistant depression.
Assuming a 90% discount, psilocybin would still have an adjusted Cohen’s d = 0.14 (6 months after treatment), roughly in the ballpark of the CFAR workshop effect (d = 0.17).
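To make that arithmetic explicit, here is a minimal sketch in Python; the d-values are the ones quoted above, and the 90% haircut is simply the assumption being made in this thread:

```python
# Back-of-the-envelope version of the discount argument above.
# Figures are the ones quoted/assumed in this thread, not new data.
psilocybin_d_6mo = 1.4   # Carhart-Harris et al. 2018, Cohen's d at 6 months
cfar_d = 0.17            # CFAR 2015 longitudinal study (life satisfaction)
discount = 0.90          # assumption: 90% of the effect is artifact/selection/population-specific

adjusted_d = round(psilocybin_d_6mo * (1 - discount), 2)
print(adjusted_d, cfar_d)  # 0.14 vs 0.17 -- same rough ballpark
```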
To explicitly separate out two issues that seem to be getting conflated:
1. Long-term-focused EAs should make use of the best mental health care available, which would make them more effective.
2. Some long-term-focused EAs should invest in making mental health care better, so that other long-term-focused EAs can have better mental health care and be more effective.
The former seems very likely true.
The latter seems very likely false. You would need the additional cost of researching, advocating for and implementing a specific new treatment (here, psilocybin) across some entire geography to be justified by the expected improvement in mental health care (above what already exists) for specifically long-term-focused EAs in that geography (<0.001% of the population). The math for that seems really unlikely to work out.
I continue to focus on the claims about this being a good long-term-focused intervention because that’s what is most relevant to me.
-----
Non-central notes:
We’ve jumped from emotional blocks & unhelpful personal narratives to life satisfaction & treatment-resistant depression, which are very different.
As you note, the two effects you’re now comparing (life satisfaction & treatment-resistant depression) aren’t really the same at all.
I don’t think that straightforwardly comparing two Cohen’s d measurements is particularly meaningful when comparing across effect types.
fwiw I think negative self-talk (a kind of emotional block) & unhelpful personal narratives are big parts of the subjective experience of depression.
Comparing dissimilar effects is a core part of EA-style analysis, right?
I’m not arguing against trying to compare things. I was saying that the comparison wasn’t informative. Comparing dissimilar effects is valuable when done well, but comparing d-values of different effects from different interventions tells you very little.
Probably the crux here is that I think rationality training & the psychedelic experience can achieve similar kinds of behavior change (e.g. less energy spent on negative self-talk & unhelpful personal narratives) such that their effect sizes can be compared.
Whereas you think that rationality training & the psychedelic experience are different enough that believable comparison isn’t possible.
Does that sound right to you?
The latter seems very likely false. You would need the additional cost of researching, advocating for and implementing a specific new treatment
Does this mean you think that projects like CFAR & Paradigm Academy shouldn’t be associated with the EA plank?
… specifically long-term-focused EAs in that geography (<0.001% of the population). The math for that seems really unlikely to work out.
Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved. (See section 3(a) of the OP.)
The marginal value of each additional value-aligned + capable long-termist is probably quite high.
Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved.
Pointing out that there are two upsides is helpful, but I had just made this claim:
The math for [the bold part] seems really unlikely to work out.
It would be helpful if you could agree with or contest that claim before we move on to the other upside.
-
Rationality projects: I don’t care to arbitrate what counts as EA. I’m going to steer clear of present-day statements about specific orgs, but you can see my donation record from when I was a trader on my LinkedIn profile.
Isn’t much of the present discussion about “what counts as EA?”
Maybe I’m getting hung up on semantics. The question I most care about here is: “what topics should EAs dedicate research capacity & capital to?”
Does that seem like a worthwhile question?
It would be helpful if you could agree with or contest that claim before we move on to the other upside.
Right. I’m saying that the math we should care about is:
effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor + effect from short-termist benefits
I think that math is likely to work out.
Given your priors, we’ve been discounting “effect from short-termist benefits” to 0.
So the math is then:
effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor
And I think that is also likely to work out, though the case is somewhat weaker when we discount short-termist benefits to 0.
(I also disagree with discounting short-termist benefits to 0, but that doesn’t feel like the crux of our present disagreement.)
effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor
Let’s go. Upside 1:
effect from boosting efficacy of current long-termist labor
Adding optimistic numbers to what I already said:
Let’s say EAs contribute $50m† of resources per successful drug being rolled out across most of the US (mainly contributing to research and advocacy). We ignore costs paid by everyone else.
This somehow causes rollout about 3 years earlier than it would otherwise have happened, and doesn’t trade off against the rollout of any other important drug.
At any one time, about 100 EAs†† use the now-well-understood, legal drug, and their baseline productivity is average for long-term-focused EAs.
This improves their productivity by an expected 5%††† vs alternative mental health treatment.
Bottom line: your $50m buys you about 100 x 5% x 3 = 15 extra EA-years via this mechanism, at a price of $3.3m per person-year.
Suppose we would trade off $300k for the average person-year††††. This gives a return on investment of about $300k/$3.3m = 0.09x. Even with optimistic numbers, upside 1 justifies a small fraction of the cost, and with midline estimates and model errors I’d expect more like a ~0.001x multiplier. Thus, this part of the argument is insignificant.
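For concreteness, the same Fermi estimate as a short Python sketch; every input is one of the optimistic assumptions stated above, not a measured quantity:

```python
# Fermi estimate for upside 1, using the optimistic assumptions stated above.
ea_cost = 50e6                 # assumed EA spend per drug rolled out across most of the US
years_earlier = 3              # assumed acceleration of rollout
users = 100                    # assumed EAs using the legal drug at any one time
productivity_gain = 0.05       # assumed productivity boost vs alternative treatment
value_per_person_year = 300e3  # assumed value of an average EA person-year

extra_ea_years = users * productivity_gain * years_earlier  # ~15 extra EA-years
cost_per_ea_year = ea_cost / extra_ea_years                 # ~$3.3m per person-year
roi = value_per_person_year / cost_per_ea_year              # ~0.09x
print(round(extra_ea_years), round(cost_per_ea_year), round(roi, 2))
```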
-----
Also, I’ve decided to just reply to this thread, because it’s the only one that seems decision-relevant.
† Various estimates of the cost of introducing a drug here, with a 2014 estimate being $2.4bn. I guess EAs could only cover the early stages, with much of the rest being picked up by drug companies or something.
†† Very, very optimistically, 1,000 long-term-focused EAs in the US, 10% of the population suffer from relevant mental health issues, and all of them use the new drug.
††† This looks really high but what do I know.
†††† Pretty made up but don’t think it’s too low. Yes, sometimes years are worth more, but we’re looking at the whole population, not just senior staff.
Let’s say EAs contribute $50m†… Various estimates of the cost of introducing a drug here, with a 2014 estimate being $2.4bn. I guess EAs could only cover the early stages, with much of the rest being picked up by drug companies or something.
An EA contribution of far less than $50m would be leveraged.
The $2.4bn estimate doesn’t apply well to psychedelics, because there’s no cost of drug discovery here (the drugs in question have already been discovered).
As a data point, MAPS has shepherded MDMA through the three phases of the FDA approval process with a total spend of ~$30m.
This somehow causes rollout about 3 years earlier than it would otherwise have happened, and doesn’t trade off against the rollout of any other important drug.
The current most important question for legal MDMA & psilocybin rollout in the US is not when, but at what quality. We’re at a point where the FDA is likely (>50% chance) going to reschedule these drugs within the next 5 years (both have received breakthrough therapy designation from the FDA).
Many aspects of how FDA rescheduling goes are currently undetermined (insurance, price, off-label prescription, setting in which the drugs can be used). A savvy research agenda + advocacy work could tip these factors in a substantially more favorable direction than would happen counterfactually.
Doing research & advocacy in this area scales fairly linearly (most study designs I’ve seen cost between $50k and $1m, and advocates can be funded for a year for $60k–$90k).
Very, very optimistically, 1,000 long-term-focused EAs in the US, 10% of the population suffer from relevant mental health issues, and all of them use the new drug.
From the OP:
The 2019 Slate Star Codex reader survey offers some data here: 17.4% of survey respondents have a formal diagnosis of depression (another 16.7% suspect they are depressed but haven’t been diagnosed); 12.6% of respondents have a formal diagnosis of anxiety (another 18.7% suspect they have anxiety but haven’t been diagnosed).
So somewhere between 34.1% and 65.4% of SSC readers report having a relevant mental health issue (depending on how much overlap there is between the reports of anxiety & reports of depression).
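A quick sketch of where those bounds come from; the figures are the SSC 2019 survey numbers quoted above, and the bounds just correspond to complete versus zero overlap between the depression and anxiety groups:

```python
# Bounds on "share of SSC respondents reporting depression and/or anxiety".
# Figures are the SSC 2019 survey numbers quoted above, in percent.
depression = 17.4 + 16.7   # diagnosed + suspected -> 34.1
anxiety = 12.6 + 18.7      # diagnosed + suspected -> 31.3

lower_bound = max(depression, anxiety)  # complete overlap between the groups
upper_bound = depression + anxiety      # zero overlap
print(round(lower_bound, 1), round(upper_bound, 1))  # 34.1 65.4
```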
I think SSC readers are an appropriate comparison class for long-term-focused EAs.
That said, I agree with the thrust of this part of your argument. There just aren’t very many people working on long-termist stuff at present. Once all of these people are supported by a comfortable salary, it’s not clear that further spend on them is leveraged (i.e. not clear that there’s a mechanism for converting more money to more research product for the present set of researchers, once they’re receiving a comfortable salary).
So perhaps the argument collapses to:
effect from increasing the amount of long-termist labor + effect from short-termist benefits
And because of your priors, we’re discounting “effect from short-termist benefits” to 0.
I still propose that:
effect from increasing the amount of long-termist labor
is probably worth it.
Doesn’t feel like a stretch, given that this mechanism underpins the case for most of the public-facing work EA does (e.g. 80,000 Hours, CFAR, Paradigm Academy, Will MacAskill’s book).
This was a really interesting and well-written thread! To clarify, Milan, is your argument that psychedelics would make people more altruistic, and therefore they’d start working on protecting the long term future? I didn’t quite understand your argument from the OP.
To clarify, Milan, is your argument that psychedelics would make people more altruistic, and therefore they’d start working on protecting the long term future?
Yes, from the OP:
The psychedelic experience also seems like a plausible lever on increasing capability (via reducing negative self-talk & other mental blocks) and improving intentions (via ego dissolution changing one’s metaphysical assumptions).
...
By “changing one’s metaphysical assumptions,” I mean that the psychedelic state can change views about what the self is, and what actions constitute acting in one’s “self-interest.”
I was using “improving intentions” to gesture towards “start working on EA-aligned projects (including long-termist projects).”
(There’s a lot of inferential distance to bridge here, so it’s not surprising that it’s non-trivial to make my views legible. Thanks for asking for clarification.)
In general, I’m not sure people who have tried psychedelics are overrepresented in far future work, if you control for relevant factors like income and religious affiliation. What makes you think increasing the number of people who experience a change in their metaphysical assumptions due to psychedelic drugs will increase the number of people working on the far future?
I think psychedelics can make people more altruistic.
Unfortunately, at present I largely have to argue from anecdote, as there are only a few studies of psychedelics in healthy people (our medical research system is configured to focus predominantly on interventions that address pathologies).
Lyons & Carhart-Harris 2018 found some results tangential to increased altruism – increased nature-relatedness & decreased authoritarianism in healthy participants:
Nature relatedness significantly increased (t(6) = −4.242, p = 0.003) and authoritarianism significantly decreased (t(6) = 2.120, p = 0.039) for the patients 1 week after the dosing sessions. At 7–12 months post-dosing, nature relatedness remained significantly increased (t(5) = −2.707, p = 0.021) and authoritarianism remained decreased at trend level (t(5) = −1.811, p = 0.065).
Whether psychedelics make people more altruistic is one of the questions I most want to see studied.
---
I don’t think the psychedelic experience per se will make people more altruistic and more focused on the long term.
I think a psychedelic experience, paired with exposure to EA-style arguments & philosophy (or paired with alternative frameworks that heavily emphasize the long term, e.g. the Long Now), can plausibly increase altruistic concern for the far future.
---
if you control for relevant factors like income and religious affiliation
fwiw, controlling for religious affiliation may not be appropriate, because psychedelics may increase religiosity. (Another study I want to see!)