Strong upvoted this one.
A prominent Buddhist monk in the Thai Forest Tradition (Ajahn Jayasaro) said the following, which I feel is highly relevant here:
“Someone had asked Lama Govinda, ‘What do you think of expanding minds through chemical means?’ He said that if you have an ignorant mind, you just get expanded ignorance. I thought that was spot on. It is all within the sphere of darkness, isn’t it? You are still playing around with different modes of ignorance. You are not actually going beyond; you are not transcending. You are transcending one particular state of ignorance, but you are still in the same building. You haven’t got out of the building; you still haven’t got out of prison. So this sobriety is that whole turning away from all the strange and unusual experiences, visions, and psychological and mental states that are available through chemical means, and taking delight in the simple, down-to-earth clarity of awareness.”
effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor
Let’s go. Upside 1:
effect from boosting efficacy of current long-termist labor
Adding optimistic numbers to what I already said:
Let’s say EAs contribute $50m† of resources per successful drug being rolled out across most of the US (mainly contributing to research and advocacy). We ignore costs paid by everyone else.
This somehow causes rollout about 3 years earlier than it would otherwise have happened, and doesn’t trade off against the rollout of any other important drug.
At any one time, about 100 EAs†† use the now-well-understood, legal drug, and their baseline productivity is average for long-term-focused EAs.
This improves their productivity by an expected 5%††† vs alternative mental health treatment.
Bottom line: your $50m buys you about 100 x 5% x 3 = 15 extra EA-years via this mechanism, at a price of $3.3m per person-year.
Suppose we would trade off $300k for the average person-year††††. This gives a return on investment of about $300k/$3.3m = 0.09x. Even with optimistic numbers, upside 1 justifies a small fraction of the cost, and with midline estimates and model errors I’d expect more like a ~0.001x multiplier. Thus, this part of the argument is insignificant.
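Spelled out, the back-of-envelope arithmetic above looks like this (every figure is one of the optimistic assumptions stated in this comment, not real data):

```python
# Back-of-envelope check of upside 1, using the optimistic
# assumptions from this comment (not actual measurements).
cost = 50e6               # EA resources per successful drug rollout ($)
users = 100               # EAs using the drug at any one time
productivity_gain = 0.05  # expected boost vs. alternative treatment
years_earlier = 3         # rollout acceleration attributable to EA effort

extra_ea_years = users * productivity_gain * years_earlier  # 15.0
cost_per_year = cost / extra_ea_years                       # ~$3.3m
value_per_year = 300e3    # assumed value of an average EA person-year
roi = value_per_year / cost_per_year                        # ~0.09

print(f"{extra_ea_years:.0f} EA-years at ${cost_per_year/1e6:.1f}m each, ROI {roi:.2f}x")
```

The bottom line follows directly: even granting every optimistic input, the return is well under 1x.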
Also, I’ve decided to just reply to this thread, because it’s the only one that seems decision-relevant.
† Various estimates of the cost of introducing a drug here, with a 2014 estimate being $2.4bn. I guess EAs could only cover the early stages, with much of the rest being picked up by drug companies or something.

†† Very, very optimistically, 1,000 long-term-focused EAs in the US, 10% of the population suffer from relevant mental health issues, and all of them use the new drug.

††† This looks really high but what do I know.

†††† Pretty made up but don’t think it’s too low. Yes, sometimes years are worth more, but we’re looking at the whole population, not just senior staff.
I am confused about what exactly you are trying to communicate with this post and its partner (https://forum.effectivealtruism.org/posts/FDczXfT4xetcRRWtm/an-effective-altruist-plan-for-socialism). My sense is that you are saying something like:

1) Look, socialist-leaning and capitalist-leaning EAs, the policies you probably want are essentially the same, work together and make something happen, and

2) Look, EAs, I can write nearly identical posts with titles that will make you assume the posts are at odds—challenge your assumptions. Realize the power language has over you.

Or maybe you primarily want engagement with the policies you are most excited about, but want comments from both socialists and capitalists, and you felt this was the best way to achieve that?

I seek clarity.

(I feel stupid, not being able to interpret you well, albeit only after one quick read-through. But, I think folks should typically make comments when confused, so here I am.)
If we are comparing donating blood to something potentially more effective we could do with our time, like earning a wage to donate, don’t we need to consider whether the opportunity to earn that wage is actually available? For example, I don’t have the opportunity to earn money at any hour of my choosing: I get paid to work 40 hours a week, so taking two hours on the weekend to donate blood isn’t sacrificing two hours of wage-earning, because I’ve already earned the maximum I can that week.
I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I’d be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.
For instance, a lot of today’s fiction seems cynical and pessimistic about human nature; the characters frequently don’t seem to have goals related to anything other than their immediate social environment; and they often don’t pursue those goals effectively (apparently for the sake of dramatic tension). Fiction demonstrating people working effectively on ambitious, broadly beneficial goals, perhaps with dramatic tension caused by something other than humans being terrible to each other, could help propagate EA mindset.
Both the hover-over and sidenotes on gwern.net are pure JS, and require no modifications to the original Markdown or generated HTML footnotes; they just run and modify the appearance clientside and degrade to the original footnotes if JS is disabled.
Have you heard of Harry Potter and the Methods of Rationality (http://www.hpmor.com/) and/or http://unsongbook.com ? I think they serve some of this role for the community already.
It’s interesting they are both long-form web fiction; we don’t have EA tv shows or rock bands that I know of.
Argument in OP:
Interventions that increase the set of well-intentioned + capable people also seem quite robust to cluelessness, because they allow for more error correction at each timestep on the way to the far future.
The psychedelic experience also seems like a plausible lever on increasing capability (via reducing negative self-talk & other mental blocks) and improving intentions (via ego dissolution changing one’s metaphysical assumptions).
I view this as a weak argument. I think one could make this sort of argument for a large number of interventions: reading great literature, yoga, a huge number of productivity systems, participating in healthy communities, quantified self, volunteering for local charities like working at a soup kitchen, etc. Some of these interventions focus more on increasing capability (productivity systems, quantified self) and some focus more on improving intentions (participating in healthy communities, volunteering). Some focus on both to some degree.
It seems like a weak argument to me because:
(a) the average effects of psychedelics on increasing capability seem unlikely to be strong. They may be high for a small percentage of people, but I’m not aware of any particularly strong reason to think that the average effects are large.
They may be large for people with mental health issues, but then it’s not really an intervention for increasing capability in general, it’s a mental health intervention. These are distinct, and as I said above, psychedelics could plausibly be a top intervention for mental health.
(b) The improving-intentions aspect looks to be on even shakier ground. What is the evidence that taking psychedelics effectively improves intentions in a manner relevant to working on the long term? I’ve never heard of any psychedelic or spiritual community being focused on long-termism in an EA-relevant manner. Some people report ego dissolution, but I’m not even aware of anecdotal reports that ego dissolution led to non-EAs thinking about and working on long-term things. It sounds like you know some cases where it may have been helpful, but I’m skeptical that a high-quality study would report anything amazing.
No, I expected that no rigorous research had been done on NLP as of 2014, and I don’t know how rigorous the more recent research has been.
I like your encouragement to create more art. However, I noticed cringing at some of your ideas in the appendix. I worry that they would end up being “poorly executed cultural artefacts [that] may put EA into disrepute” as you put it.
I do not feel capable of explaining exactly where the cringe reaction is coming from, but a few examples:
I do not like the idea in Beautopia of equating physical appearance with moral goodness, given that a) it is already an issue that people assume positive personality traits when they see physically attractive people and b) it assumes there is some objective and real “good” that can be calculated. And the final plot line implying that it is good to kill people we think are evil seems like a bad meme to spread.
Dead baby currency seems overly simplistic and insensitive, although I am having a hard time putting words to why. It also triggers scrupulosity concerns (for example, see http://www.givinggladly.com/2012/03/tradeoffs.html ).
Finally, I am wary of how you refer to “Africa” monolithically. For more, see https://www.theatlantic.com/international/archive/2013/08/confusing-country-continent-how-we-talk-about-africa/311621/.
Probably the crux here is that I think rationality training & the psychedelic experience can achieve similar kinds of behavior change (e.g. less energy spent on negative self-talk & unhelpful personal narratives) such that their effect sizes can be compared.
Whereas you think that rationality training & the psychedelic experience are different enough that believable comparison isn’t possible.
Does that sound right to you?
Rationality projects: I don’t care to arbitrate what counts as EA.
Isn’t much of the present discussion about “what counts as EA?”
Maybe I’m getting hung up on semantics. The question I most care about here is: “what topics should EAs dedicate research capacity & capital to?”
Does that seem like a worthwhile question?
It would be helpful if you could agree with or contest that claim before we move on to the other upside.
Right. I’m saying that the math we should care about is:
effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor + effect from short-termist benefits
I think that math is likely to work out.
Given your priors, we’ve been discounting “effect from short-termist benefits” to 0.
So the math is then:

effect from boosting efficacy of current long-termist labor + effect from increasing the amount of long-termist labor
And I think that is also likely to work out, though the case is somewhat weaker when we discount short-termist benefits to 0.
(I also disagree with discounting short-termist benefits to 0, but that doesn’t feel like the crux of our present disagreement.)
Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved.
Pointing out that there are two upsides is helpful, but I had just made this claim:
The math for [the bold part] seems really unlikely to work out.
Rationality projects: I don’t care to arbitrate what counts as EA. I’m going to steer clear of present-day statements about specific orgs, but you can see my donation record from when I was a trader on my LinkedIn profile.
I’m not arguing against trying to compare things. I was saying that the comparison wasn’t informative. Comparing dissimilar effects is valuable when done well, but comparing d-values of different effects from different interventions tells you very little.
If each of them donated an average of US$1 to this cause, they would match all of GD’s transfers in 2017.
fwiw I think it’s very hard to get people to donate to things.
From section 4(b) of the OP: “Roughly $40 million has been committed to psychedelic research since 2000.”
Got it, thanks.
So, I estimate GD results in US$200/QALY…
Enthea’s estimate for psychedelic liberalization is $472/DALY.
As far as I know, GiveWell considers cost-effectiveness estimates informative only when the efficacy differences are orders of magnitude apart.
For two interventions that are on the same order of magnitude, the analyses aren’t granular enough to believably inform which is more effective.
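As a quick illustration of that point, the two figures quoted in this thread sit well within one order of magnitude of each other (treating $/QALY and $/DALY as roughly comparable, which is itself a contestable assumption):

```python
import math

# Order-of-magnitude comparison of the two cost-effectiveness
# figures quoted in this thread.
givedirectly = 200.0  # US$/QALY, the GD estimate quoted above
enthea = 472.0        # US$/DALY, Enthea's estimate quoted above

ratio = enthea / givedirectly       # ~2.4x
same_order = math.log10(ratio) < 1  # within one order of magnitude?
print(f"ratio {ratio:.1f}x, same order of magnitude: {same_order}")
```

Since the ratio is about 2.4x rather than 10x or more, this kind of analysis can't believably rank the two interventions.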
OK, I realised the flaw in my argument. If I have 1,000 GBP to give away, I could either ‘walk’ 1,000 GBP in the direction of charity x, or 1,000 GBP in the direction of charity y, but only sqrt(x^2 + y^2) ≤ 1,000 in a combination of the two. The optimal allocation (x, y) of money is whatever maximises the scalar product of the gradient with the allocation, (dU/dx, dU/dy) · (x, y), under the restriction x + y = 1000. If dU/dx = dU/dy, a 50⁄50 allocation is as good as allocating all money to the most effective charity; otherwise, giving all money to the most effective charity maximises utility. Sorry for the confusion, and thanks for the discussion.
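The corner-solution logic here can be checked numerically. This is a minimal sketch with made-up gradient values (gx and gy standing in for dU/dx and dU/dy, assumed locally constant):

```python
# Sketch of the allocation argument: with locally linear utility
# U(x, y) ~ gx*x + gy*y and budget constraint x + y = B, the
# optimum is a corner solution whenever the gradients differ.
# Gradient values below are made up for illustration.
B = 1000            # total budget (GBP)
gx, gy = 3.0, 2.0   # assumed marginal utilities dU/dx, dU/dy

# Brute-force over integer allocations x to charity x (rest to y).
utility, x_star = max((gx * x + gy * (B - x), x) for x in range(B + 1))
print(x_star, utility)  # all 1000 GBP go to the higher-gradient charity
```

With gx > gy the maximiser is x = 1000 (everything to charity x); only when gx = gy are interior allocations like 50/50 equally good, matching the conclusion above.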
We’ve jumped from emotional blocks & unhelpful personal narratives to life satisfaction & treatment-resistant depression, which are very different.
fwiw I think negative self-talk (a kind of emotional block) & unhelpful personal narratives are big parts of the subjective experience of depression.
As you note, the two effects you’re now comparing (life satisfaction & treatment-resistant depression) aren’t really the same at all.
Comparing dissimilar effects is a core part of EA-style analysis, right?
The latter seems very likely false. You would need to add the cost of researching, advocating for, and implementing a specific new treatment.
Does this mean you think that projects like CFAR & Paradigm Academy shouldn’t be associated with the EA plank?
… specifically long-term-focused EAs in that geography (<0.001% of the population). The math for that seems really unlikely to work out.
Psychedelic interventions seem promising because they can plausibly increase the number of capable people focused on long-termist work, in addition to plausibly boosting the efficacy of those already involved. (See section 3(a) of the OP.)
The marginal value of each additional value-aligned + capable long-termist is probably quite high.