Notes on EA-related research, writing, testing fit, learning, and the Forum
UPDATE: I now consider my 2022 post "Interested in EA/longtermist research careers? Here are my top recommended resources" a better starting point than this older post, but this post might be useful after you've read that 2022 one.
Cross-posted to LessWrong.
I've had calls with >30 people who are interested in things like testing their fit for EA-aligned research careers, writing on the Forum, "getting up to speed" on areas of EA, etc. (This is usually during EA conferences.)
I gradually collected a set of links and notes that I felt many such people would benefit from seeing, then turned that into a Google Doc. Many people told me they found that doc useful, so I'm now (a) sharing it as a public post, and (b) still entertaining the hypothesis that those people were all just vicious liars and sycophants, of course.
Disclaimers
Not all of these links/notes will be relevant to any given person
These links/notes are most relevant to people interested in (1) research roles, (2) roles at explicitly EA organisations, and/or (3) longtermism
But this is just because that's what I know best
There are of course many important roles that aren't about research or aren't at EA orgs!
And I'm happy with many EAs prioritising cause areas other than longtermism
But, in any case, some of the links/notes will also be relevant to other people and pathways
This doc mentions some orgs I work for or have worked for previously, but the opinions expressed here are my own, and I wrote the post (and the doc it evolved from) in a personal capacity
Regarding writing, the Forum, etc.
Why you (yes, you) should post on the EA Forum (a talk by Aaron Gertler)
How you can make an impact on the EA Forum (another talk by Aaron Gertler)
Aaron also made a companion document which I think is useful even if you don't watch the talk
Feedback available for EA Forum drafts (a post by that Aaron Gertler guy I've been hearing so much about lately)
Effective Altruism Editing and Review
Reasoning Transparency
This has great writing tips that definitely apply on the Forum and would ideally apply everywhere, but unfortunately they don't perfectly align with the norms in some areas/fields
Readings and notes on how to write/communicate well
Reasons for and against posting on the EA Forum
Regarding types of writing you can do:
Write about any of the research ideas in the links in the next section
Write summaries and/or collections
Write book reviews (see Buck's suggestion, my suggestion, and posts tagged books)
"deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other stuff that we otherwise think of as especially important." (see Linch's shortform)
I also encourage you to consider reading, commenting, and posting on LessWrong
See Welcome to LessWrong! for a great introduction to that site
Sometimes people worry that a post idea might be missing some obvious, core insight, or just replicating some other writing you haven't come across. I think this is a problem only insofar as it could've been more efficient for you to learn those things than to slowly craft a post.
So if you can write (a rough version of) the post quickly, you could just do that.
Or you could ask around or make a quick Question post to outline the basic idea and ask if anyone knows of relevant things you should read.
Research ideas
Research questions that could have a big social impact, organised by discipline
A central directory for open research questions
Crucial questions for longtermists
Some history topics it might be very valuable to investigate
This is somewhat less noteworthy than the other links
Programs, approaches, or tips for testing fit for (longtermism-related) research
Programs
Not all of these things are necessarily "open" right now.
Here are things I would describe as research training programs (in alphabetical order to avoid picking favourites):
ALLFED volunteering
Center for Reducing Suffering – "Get involved"
CLR Summer Research Fellowship
Fellowship – Legal Priorities Project
FHI Summer Research Fellowship
GCRI call for collaborators/āadvisees
Governance of AI Fellowship – Future of Humanity Institute
GPI Early Career Conference Program & Forethought Global Priorities Fellowship
Nonlinear Fund
Rethink Priorities – Research Internship
SERI (Stanford Existential Risk Initiative) Summer 2021 Research Fellowship
I have a more thorough, less curated list here
Note: I know less about what the opportunities at the Center for Reducing Suffering and the Nonlinear Fund would be like than about the other opportunities, so I'm not necessarily able to personally endorse those two.
Other things
Effective Thesis
See in particular the Resources page
Consider doing a graduate degree?
Perhaps ideally with a supervisor who's an excellent researcher themselves, and who is either working on high-priority topics or flexible enough to accommodate you working on high-priority topics
You could contact Effective Thesis for advice on this
Offer to be a research assistant for someone doing great work?
See Matthew Van Der Merwe's thoughts on RA roles
See my Notes from a call with someone who's an RA to a great researcher
These roles can sometimes be done remotely
Annotated List of Project Ideas & Volunteering Resources – EA Forum
Grants
LTFF
CLR
SAF
List of EA funding opportunities
Things I often tell people about applying to EA Funds
Undergraduate thesis prize – Forethought Foundation
Improving the EA-aligned research pipeline: Sequence introduction
This sequence is targeted more at people who could help improve the pipeline than at people aiming to enter and move through it, but various parts could still be useful to the latter group.
Readings and notes on how to do high-impact research
Readings and notes on how to do high-quality, efficient research
Collection of collections of resources relevant to (research) management, mentorship, training, etc.
Getting "up to speed" on EA, longtermism, x-risks, etc.
The Precipice
Crucial questions for longtermists
A ranked list of all EA-relevant (audio)books Iāve read
A list of EA-related podcasts
Where to find EA-related videos
What are some good online courses relevant to EA?
How I personally "got up to speed"
You can search the EA Forum tags page for relevant tags, read those pages (since they're also Wiki entries), and use them as directories to useful posts, authors to talk to, etc.
I now make and use Anki cards, and would encourage other people to consider doing so too
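If you'd like to script card creation rather than typing cards into Anki's GUI (which works perfectly well), here's a minimal sketch using the genanki Python library. This is purely illustrative and my own suggestion, not part of the workflow above; the deck name, IDs, and example card are all made up.

```python
import genanki

# A simple two-field note type (front/back). The numeric IDs are arbitrary,
# but should stay fixed across runs so Anki can recognise the same deck.
model = genanki.Model(
    1607392319,
    "Simple QA",
    fields=[{"name": "Question"}, {"name": "Answer"}],
    templates=[{
        "name": "Card 1",
        "qfmt": "{{Question}}",
        "afmt": "{{FrontSide}}<hr id='answer'>{{Answer}}",
    }],
)

deck = genanki.Deck(2059400110, "EA reading notes")  # hypothetical deck name

# Example card (made up): the kind of fact you might capture while reading.
deck.add_note(genanki.Note(
    model=model,
    fields=["What does 'x-risk' abbreviate?", "Existential risk"],
))

# Writes an .apkg file you can import into Anki via File > Import.
genanki.Package(deck).write_to_file("ea_reading_notes.apkg")
```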
Other
Career planning: An in-depth process & template (80,000 Hours)
One key piece of career advice I want to highlight is that people often apply to too few things, don't make enough ambitious applications, and/or don't make enough "safe applications"
I think it's generally good to apply to a lot of things, including both ambitious and safe options
(But of course, any piece of general career advice will have some exceptions)
Get Involved
Research Methods
EA Creatives and Communicators Slack
I'd welcome comments suggesting other relevant links, or just sharing people's own thoughts on any of the topics addressed above!