I definitely read the post as implicitly suggesting that EAs should consider going on the retreat. What would be the point of the post otherwise? There is some discussion of psychedelics in general, but that doesn’t seem to be its primary purpose.
I’m concerned you’re arguing against a straw man: did anyone ever claim that 80k’s list was true for every single possible person? I don’t think so, and such a claim would be implausible.
As an anecdote, I’ve always read their list and recommendations as applying to their target audience of talented graduates of elite Western colleges.
OP neglected to mention that the retreat costs $1700 according to the website. Nor does there seem to be any financial aid plan or discount for EAs of the kind CFAR offers.
The link didn’t work properly for me. Did you mean the following comment?
We’re also working on understanding invertebrate sentience and wild animal welfare—maybe not “cause X” because other EAs are already aware of this cause, but I think it will help unlock important new interventions.
Additionally, we’re doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not “cause X” because EAs are already aware of it.
Lastly, we’re also working on examining ballot initiatives and other political methods of achieving EA aims—maybe not “cause X” because it isn’t a new cause area, but I think it will help unlock important new ways of making progress on our existing causes.
Can you expand on this answer? E.g., how much of a focus this is for you, how long you’ve been doing it, how long you expect to continue, etc.
As an extreme example, the Young Adult fiction community has recently seen multiple authors cancel their completed, soon-to-be-published books based on allegations that would not be taken very seriously in EA or most other communities. One example is detailed in Slate: Amelie Zhao’s anticipated book, Blood Heir, was essentially retracted by the author after completion but before publication because of social media pressure stemming from flimsy-seeming accusations of racial insensitivity and plagiarism.
To be clear, I do not think it is plausible that Jacy is wholly innocent. Persistent accusations going back to his expulsion from college seem quite likely to be rooted in some level of harmful behavior. But I don’t think Jacy apologizing and stepping back from public life is strong evidence of anything—it seems to me that he would likely do that even if he thought he had committed only minor misdemeanors. CEA’s response seems like stronger evidence of harmful behavior to me.
Julia Wise clarified this in her reply elsewhere in this comment section:
The accusation of sexual misconduct at Brown is one of the things that worried us at CEA. But we approached Jacy primarily out of concern about other more recent reports from members of the animal advocacy and EA communities.
I don’t know much about investing, but a couple of quick comments might be helpful:
I understand that many people knowledgeable about investing have thought they could beat the market and were wrong, but how many of them were knowledgeable about both investing and rationality and still wrong? Given how few rationalists there are, I doubt there have been many.
Is there any empirical reason to think that knowledge about ‘rationality’ is particularly helpful for investing?
If we assign a 1⁄3 chance of the strategy beating the market by 3% and otherwise matches the market
1⁄3 chance seems possibly orders of magnitude too high to me.
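To make the arithmetic concrete (a rough sketch; the 3% edge comes from the quoted assumption, and the alternative probabilities below are purely illustrative):

$$E[\text{excess return}] = \tfrac{1}{3} \times 3\% + \tfrac{2}{3} \times 0\% = 1\%$$

If the true chance of beating the market were instead, say, 1⁄30 or 1⁄300, the expected edge would shrink to roughly 0.1% or 0.01%, which could easily be swamped by trading costs and the effort involved.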
I also perceived the personalized email as indicating a reasonable (30-50%+) chance of getting hired if I applied. I certainly didn’t perceive it as indicating the 10% or perhaps even lower chance it seems to be after reading this thread. It was only after a couple of my friends also got similar emails that I realized that Open Phil was probably sending personalized emails to dozens, if not hundreds, of applicants.
Something that may be hard for Open Phil to know is that it felt really flattering for me to get a personalized email from one of the most prestigious EA orgs asking me to apply. It’s sort of like Harvard sending me an email saying they’d seen my resume and thought I would be a good fit because of X, Y, and Z (all of which happened to be factors that made me think I was a good fit for Harvard too). That may have caused me to overestimate my chances, and it would probably have made me more disappointed than I otherwise would have been if I had been rejected.
A meta point: a lot of the discussion here has focused on reducing the time spent applying. Based on the replies here and my own experience, I think a more fundamental and important problem is that many, many EAs feel that either they’re working at a top EA org or they’re not contributing much. Since only a fraction of EAs can currently work at a top EA org (applicants vastly outnumber open positions), even if the time spent applying goes down a lot, many EAs will end up feeling negatively about themselves and/or EA when they get rejected. See, e.g., this post by Scott Alexander on the message he feels he gets from the community. A couple of excerpts below:
It just really sucks to constantly have one lobe of my brain thinking “You have to do [direct work/research], everybody is so desperate for your help and you’ll be letting them down if you don’t”, and the other lobe thinking “If you try to do the thing, you’ll be in an uphill competition against 2,000 other people who want to do it, which ends either in time wasted for no reason, or in you having an immense obligation to perform at 110% all the time to justify why you were chosen over a thousand almost-equally-good candidates”.
So instead I earn-to-give, and am constantly hit with messages (see above caveat! messages may not be real!) of “Why are you doing this? Nobody’s funding-constrained! Money isn’t real! Only talent constraints matter!” while knowing that if I tried to help with talent constraints, I would get “Sorry, we have 2,000 applicants per position, you’re imposing a huge cost on us by even making us evaluate you”.
To add another anecdote, my story is broadly similar to yours: top college, focused on EA, particularly well informed on longtermist topics, did plenty of EA projects, got good feedback from EAs, and now have noticeably more anxiety and depression about my ability to contribute to longtermism than I had before. I haven’t applied to many EA jobs, but a similar thing would probably happen to me if I did.
Thanks! I don’t disagree. Btw, the “remembering self” link is dead.
I’m glad I read this piece. It makes a good point!
Can you expand on the connection to EA? I’m not sure I quite see it.
Thanks for this great review. It helps outsiders understand how different EA groups and social scenes work.
Do you have estimates for how many people are involved in different groups and overall in Boston? Potentially for different levels of involvement? E.g. 30 hardcore/dedicated (whatever word seems best) EAs, 100 casual EAs.
Future Perfect put out an article on this recently.
Clearly both metaphors do some real work; I’m wondering which is better to cultivate on the margin.
My intuition is that it’s better to lean on the image of intellectual work as exploration; curious what folks here think.
I’m a bit unclear on exactly what the question is. You ask which metaphor is better to cultivate on the margin, but I’m not sure for whom or for what purpose. Both metaphors seem clearly true to some extent, and which one fits better depends a lot on the individual and the field, IMO.
Dylan Matthews’ claim that nuclear war would cause “much or all of humankind” to suddenly vanish is unsubstantiated. The idea that billions of people worldwide would die from nuclear war is not supported by models using realistic numbers of nuclear warheads. “Much” is a very vague term, but speculation that every (or nearly every) human would die is a false alarm. That error is easy to forgive, though, as it’s a common belief within EA anyway, and someone will probably try to argue with me about it.
Could you expand on this or give sources? I do hear EAs talking about nuclear war and nuclear winter being existential threats.
I think Vox, Ezra Klein, Dylan Matthews, etc. would disagree with point 2. Not to put words in someone else’s mouth, but my sense is that Ezra Klein doesn’t think their coverage is substantially flawed or systematically biased relative to other comparable sources. He might even argue that their coverage is less biased than most sources.
Could you link to some of the criticisms you mentioned in point 1? I’ve seen others claim that as well on previous EA Forum posts about Future Perfect, and I think it would be good to have at least a few sources on this. Many EAs outside the US probably know very little about Vox.
How many fellows do you plan to accept?
This is a really great and helpful post. Thanks so much for running it, trying to evaluate its impact, and writing it up!