I think it would be misleading if OP had said ‘substantial proportion’. I read ‘substantial number’ as a comment on the absolute numbers, which is vague (how many is ‘substantial’?) but not misleading.
Yup. In which case, it is a ‘big list’ for such folks.
Though I am saying that 80,000 Hours’ research can’t offer a single, definite ranking of what is best for everyone to do, that doesn’t mean that their research isn’t very useful for people figuring out what it is best for them to do
Well, they do offer ‘A list of the most urgent global problems’. I’ll grant this isn’t a list of what it is best for everyone to do, but it is (plausibly, from their perspective) a list of what it is best for most people to do (or ‘most EAs’, or some nearby specification). Indeed, given 80k has a concept of ‘personal fit’, which is distinct from their rating of the problems, the natural reading of the list is that it provides a general, impersonal ranking of where (average?) individuals can do the most good.
I’m concerned you’re defending a straw man: did anyone ever claim 80k’s list was true for every single possible person? I don’t think so, and such a claim would be implausible.
A couple of comments:
Almost everyone in EA holds either a longtermist view or a person-affecting view
This puzzled me slightly. One reason is that longtermism and person-affecting views are different categories; the former is a view about where, in practice, value lies and the latter is a view about where, in theory, value lies. You could be a totalist (all possible people matter), which is not a person-affecting view, but still be a near-termist. I think a better set-up would have been: ‘psychedelics look good whether you just value the near term or the long term’. I suppose that leaves out the ‘medium-termists’, but I don’t know how many people there are who hold this view, whatever it is, inside or outside EA.
Also robust: interventions that increase the set of well-intentioned + capable people
CFAR & Paradigm Academy are aimed at this
The psychedelic experience also seems like a plausible lever on increasing capability (via reducing negative self-talk & other mental blocks) and improving intentions (via ego dissolution changing one’s metaphysical assumptions)
I would like you to say more about this. It seems plausible to me that training rationality is orders of magnitude more impactful for the long run, so this is an objection to counter.
under a longtermist view, psychedelic interventions are plausibly in the same ballpark of effectiveness as x-risk interventions
I don’t think you’ve shown this. It’s more plausible to me that x-risk is a top-tier intervention and that rationality and the ‘mindset-changingness’ of psychedelics are in the lower tiers. This would still make them potentially very interesting from a longtermist perspective: in the bucket of ‘things to take seriously and possibly fund if x-risk has absorbed as many resources as it can’.
Just FYI, I wrote a mammoth series of articles on drug policy reform 18 months or so ago where I argued that psychedelics for mental health look very promising from the near-term perspective. In other words, I explicitly claim what you’re claiming! I haven’t had a chance to do more work on it since, and I add the usual caveats about not necessarily agreeing with everything past-Michael wrote.
Also, just because psychedelics are promising as a category of intervention, it doesn’t follow that setting up a retreat of this kind is the best way to go within that (sub)cause area. You’d need to argue for that too.
This post did not convince me that the business was created ‘for EA reasons.’
I think this is uncharitable and I gave a small downvote as a result. Given that those running this business are involved in the EA community and there is at least a plausible story to tell about why it is impactful, you’re essentially accusing the OP of acting in bad faith when there isn’t a compelling reason to do so.
And contrary to Forum standards, it was written to persuade, not to inform
I reread this and didn’t get the sense that it was written to persuade rather than to inform.
otherwise why would there be no studies listed that found no effect or a negative effect?
I’ve been looking at the research on psychedelics for a while; see https://forum.effectivealtruism.org/posts/wu9nEXWtvhEnYQTxG/high-time-for-drug-policy-reform-part-1-4-introduction-and and the other three posts in the series. I can’t recall a study claiming psychedelics have no effect or negative effects. I agree that is potentially suspicious, but it’s in line with my view that they have positive effects and that there isn’t much research on this.
But I don’t know any practising doctors in the EA community, so this is definitely the wrong place to advertise
Again, I think it’s uncharitable to assume the purpose is simply to make money from the participants of this forum. I think it’s fine, good even, for people in the community to tell others what they are doing. Where else is one supposed to make these announcements?
So it doesn’t seem to be that there’s any insoluble tension between taking account of individual difference and communicating the same message to a broad audience
I don’t think the tension is between those things. The tension is between saying ‘our research is useful: it tells people (in group X) what it is best for them to do’ and ‘our research does not offer a definitive ranking of what it is best for people to do (whether people in group X or otherwise)’. I don’t think you can have it both ways.
While this isn’t entirely personalized (it’s based only on certain attributes that 80,000 Hours highlights), it’s also far from a single, definitive list
Then it seems reasonable to interpret it as (an attempt at) a definitive list if you have those attributes.
I understand why the author is arguing that 80k doesn’t offer a big list, but I think that argument undermines the claim that 80k is useful (“Hey, we’re not telling anyone what to do!” “Really? I thought that was the point”)
80,000 Hours’ research does not and cannot yield a “big list” of the best career paths, because no such thing exists. Instead, we should use 80,000 Hours content to map out our own personal lists and figure out how to do the top things on them.
These two sentences seem to be in a lot of tension. If giving advice about which careers do the most good were entirely personal, it would follow that you could make no general recommendations at all about which careers are better in terms of impact, and therefore 80k should stop what they are doing. However, if you can make general recommendations, and thus say which careers have more impact than others, then there is a ‘big list’ after all.
We might disagree about who this is a ‘big list’ for—the average person, an omni-skilled graduate of a top university, the average reader of 80k’s content—but however we fill that out, it’s still possible to see it as a ‘big list’.
I’m entirely with you that it doesn’t make sense to feel bad if someone else can do more good than you. The aim is to do the most good you can do, not the most good someone else who isn’t you can do. Despite recognising this on a conceptual level, I still find it hard to believe, and I often feel guilt (or shame or sadness) when I think of people whose ‘altruistic successfulness’ surpasses mine.
Hello Kris. Can you say what type of people you think should be spending their time doing this? I like the idea, but it seems like a lot of effort for someone who isn’t already plugged into these networks and doesn’t have a professional interest in the area.
I also think having David Clark speak at events is a scalable solution!
Thanks for writing this up. Possibly a pedantic comment, but aren’t Outside and Weird the same? I can’t see how my strategies would differ if I were pursuing one rather than the other.
Thanks for writing this up. Great to see people testing things and then adjusting their plans in light of the results.
This is probably a relatively minor question, but it wasn’t something you mentioned, so I thought I’d ask: was transportation a problem in people getting to the advanced workshops? I can imagine that, if a student needed to be driven to the workshop, that would make it much harder to attend.
On the second, the obvious counterargument is that it applies just as well to e.g. murder; in the case where the person is killed, “there is no sensible comparison to be made” between their status and that in the case where they are alive
Person-affecting views are those which hold that not all possible people matter. Once you’ve decided who matters (the present, necessary or actual people), it’s then a different question how you think about the badness of death for those that matter. You can say creating people isn’t good/bad, but that it’s still bad if already existing people die early. FWIW, I also find Epicureanism about the badness of death rather plausible, i.e. I don’t think we can compare, for someone, the value of living longer against not living at all. I recognise this makes me something of a ‘moral hipster’, but I think the arguments for it are pretty good, although I won’t get into that here. As such, I think death, whether by murder or other means, isn’t bad for someone. I think we tend to have the intuition that murder is wrong over and above what it deprives the deceased of, which is why we think it’s just as wrong to murder someone with 1 month vs 10 years left to live. Hence I think you’re getting at a deontological intuition, not one about value.
I find the stuff about posthumous harms and benefits very implausible. If Socrates wants us to say ‘Socrates’ and we do, does it really make his life go better?
I agree this makes more sense in terms of mission hedging
I don’t think my argument here is analogous to trying to beat the market. (i.e. I’m not arguing that AI research companies are currently undervalued.)
I have to disagree. I think your argument is exactly that AI companies are undervalued: investors haven’t considered some factor—the growth potential of AI companies—and that’s why they are such a good purchase relative to other stocks and shares.
Roger. Points taken.
Another thing I’d be interested in seeing would be the percentage changes in support for causes year on year, as that would indicate what the internal dynamics of the movement are. I’m (at least) partly motivated to see this because mental health, which I’ve written quite a lot on, may be the smallest top-priority cause, but this is also the first time it’s snuck into the list.
Thanks for this. Were there any causes you considered adding beyond those stated? Those seem like the main causes EAs support, but it would be nice to include ‘minor’ ones too, to see what the community feeling is about those, e.g. wild animal suffering, education, social justice, immigration reform, etc.
Yes, if the chance of death each year is constant it turns out that remaining life expectancy is around 1/chance of death
Can you explain why this is the case? Sorry if this is obvious, but I’m not getting it and can’t think offhand how to do the maths.
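Edit: I’ve since had a go at the maths, so I’ll sketch it here in case it’s useful to others (this is my own attempt, assuming a constant annual chance of death p; the notation is mine, not the author’s). If T is the number of further years someone survives, then T is geometrically distributed, and the tail-sum formula for expectations gives:

```latex
% A sketch, assuming a constant annual chance of death p (notation mine).
% Surviving the first t years has probability (1-p)^t, so Pr(T > t) = (1-p)^t.
% For a non-negative integer variable, E[T] = sum over t >= 0 of Pr(T > t).
\[
\mathbb{E}[T] \;=\; \sum_{t=0}^{\infty} \Pr(T > t)
            \;=\; \sum_{t=0}^{\infty} (1-p)^{t}
            \;=\; \frac{1}{1-(1-p)}
            \;=\; \frac{1}{p}.
\]
% Example: p = 1/1000 gives a remaining life expectancy of 1,000 years.
```

So, if I’ve understood correctly, the ‘1/chance of death’ result is just the mean of a geometric distribution: a 1-in-1,000 annual risk of death implies a remaining life expectancy of about 1,000 years.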
On population ethics: for totalists, it then seems the dominating concern will be how valuable it is to have a population with longer lives, which puts the emphasis in a different place from the value of keeping particular individuals alive longer.
Thanks for writing this.
Can you explain in a bit more detail, and without complicated formalisation, why life expectancy after LEV is 1000? I note life expectancy is 1000 and the chance of death in 1 year is 1/1000. Is that a coincidence, or is life expectancy post-LEV just 1/(annual chance of death)?
I know you’ve said you’re going to cover this later, but I want to flag how sensitive this is to population ethics. On totalism (the value of the outcome is the sum total of well-being of everyone who will ever live), it’s good to create lives, so it’s not necessarily a problem that there’s a higher ‘turnover’ of lives, i.e. people die and other people replace them. Totalists will want to know how longevity affects the long run for everyone, not just those who get to live longer. By contrast, if you’re a person-affecting deprivationist (there is no value in creating new lives, but for those lives that count, the badness of death is the amount of well-being they would have had had they lived), life extension looks super important!
Relevant to this, in the following article MacAskill provides this account of what EA is:
What Is Effective Altruism?
As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible and the taking of action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good. There are some defining characteristics of the effective altruist research project. The project is:
Maximizing. The point of the project is to try to do as much good as possible.
Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful rigorous argument or theoretical models.
Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals.
Impartial. Everyone’s welfare is to count equally.
Also, you’ve accidentally posted the same thing three times, if you hadn’t noticed already.