Improving the EA-aligned research pipeline: Sequence introduction
This post doesn’t necessarily represent the views of my employers.
There are many people who have the skills and desire to do EA-aligned research, or who could develop such skills via some experience, mentorship, or similar.[1]
There are many potentially high-priority open research questions that have been identified.
And there are many funders who would be happy to pay for high-quality research on such questions.
Sounds like everything must be lining up perfectly, right?
In my view,[2] the answer is fairly clearly “No”, and getting closer to a “Yes” could be very valuable. The three ingredients mentioned above do regularly combine to give us new, high-quality research and researchers, but:
This is happening more slowly than we’d like
At any given time, we still have a lot of each ingredient left over
This is requiring more “overhead” than seems ideal
E.g., lots of 1-1 career advice, coaching, and mentorship from experienced people; time-consuming hiring and grant evaluation processes
There are more “misfires” than we’d like
E.g., aspiring researchers choosing low-priority questions or tackling questions poorly; great people and projects being passed over for hiring or funding
In this sequence, I try to:
Provide a clearer description of what I see as the “problem”, its drivers, and its consequences
Outline some goals we might have when designing interventions to improve the EA research pipeline
Give an overview of 18 intervention options that seem worth considering[3]
Describe one of those intervention options in more detail, in the hope that this leads either to a good argument against that option or to someone actually building it.
Target audience
This sequence is primarily intended to inform people who are helping implement or fund interventions to improve the EA-aligned research pipeline, or who could potentially do so in future.
This sequence may also help people who themselves hope to “enter” and “progress through” the EA-aligned research pipeline.
Epistemic status / caveats for the sequence
I’m confident that these posts will usefully advance an important discussion. That said, I expect my description of the “problem” and my list of “goals” could be at least somewhat improved. And it’s possible that some of my ideas for solutions are just bad and/or that I’ve missed some other, much better ideas.
I’ve done ~6 FTE months of academic research (producing one paper) and ~11 FTE months of research at EA orgs. My framings and suggestions are probably somewhat skewed towards:
non-academic research
research for EA audiences (rather than for e.g. mainstream academics or policymakers)
longtermist research and global priorities research
I’ve spent roughly 50 hours actually writing, editing, or talking about these posts. Additionally, the topics they address are probably one of the 3-10 things I’ve spent the most time thinking about since early 2020. That said, there are various relevant bodies of evidence and literature that I haven’t dived into, such as metascience.
It also seems worth saying explicitly that:
Many people should do work other than EA-aligned research
This includes even many people who have the skills and desire to do EA-aligned research (since something else might be an even better fit for them, or even more impactful)
Indeed, I think one thing we should want from improvements to the EA research pipeline is a reduction in how much time people who actually shouldn’t do EA-aligned research spend trying, training for, or pursuing such work
EA-aligned research does not necessarily have to be done at explicitly EA organisations
E.g., one could research important topics in valuable ways at a regular think tank or academic institution
See also Working at EA vs Non-EA Orgs
Related previous work
I am far from the first person to discuss this cluster of topics. The following links may be of interest to readers of this post, and some of them informed my own thinking substantially:
Posts tagged Scalably using labour and/or Research Training Programs
Benjamin Todd on what the effective altruism community most needs
And here are some links that are somewhat relevant, but less so:
After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
Posts tagged Get Involved, Working at EA vs Non-EA Orgs, and/or EA Hiring
I also previously touched on related issues in my post A central directory for open research questions.
Acknowledgements
For comments on earlier drafts of one or more of these posts, I’m grateful to Nora Ammann, Edo Arad, Jungwon Byun, Alexis Carlier, Ryan Gourley, David Janků, Julian Jamison, Peter Hurford, David Moss, David Reinstein, and Linch Zhang. For earlier discussions that did or may have informed these posts, I’m grateful to many of the same people and to Ryan Briggs, Stanislava Fedorova, Ozzie Gooen, Alex Lintz, Amanda Ngo, Jason Schukraft, and Jesse Shulman. In some places, I’m directly drawing on or remixing specific ideas from one or more of these people. That said, these posts do not necessarily represent the views of any of these people.
Footnotes

[1] For example, Rethink Priorities recently received ~665 applications for a summer research internship program, with only ~10 internship slots available. Given the limited slots available, we had to reject at stage 2 many applicants who seemed potentially quite promising, and reject at stage 3 some candidates we were fairly confident we’d have been happy to hire if we had somewhat more funding and management capacity.

[2] I think this also matches the views of many other people; see “Related previous work”.

[3] Yes, 18. Things got a little out of hand. My original draft of this post briefly summarised those intervention options, but some commenters suggested that I refrain from mentioning potential solutions till readers had read and thought more about the problems and goals we’re aiming to solve. See also Hold Off On Proposing Solutions.
Great initiative @MichaelA. I’m not sure what a ‘sequence’ does, but I assume this means there’ll be a series of related posts to follow, is that right?
Yeah, I think it’s basically EA Forum / LessWrong jargon for “series of posts”.
There are 4 more posts to come in this sequence, plus ~2 somewhat related posts that I’ll tack on afterwards, one of which I’ve already posted: Notes on EA-related research, writing, testing fit, learning, and the Forum
Perfect, thanks!
I’m not fully satisfied with the label I’m currently using for this topic/effort and this sequence. Here are some alternatives that I considered or that other people suggested:
Scaling the EA research pipeline
Scalably training and making use of research talent
Unlocking EA-aligned research and researchers (more, better, and more efficiently)
Scaling the EA research engine
Amplifying EA-aligned researchers
Revving up the EA research engine
Priming the pump of EA research
Engineering the EA research ecosystem
(That’s in roughly descending order of how much I like them. And of course I currently prefer the label I’m actually using at the moment.)
I think the current title of the sequence is fine and probably better than the rest of the alternatives you put!
Luke Muehlhauser recently published a new post that’s also quite relevant to the topics covered in this sequence: EA needs consultancies
See also his 2019 post Reflections on Our 2018 Generalist Research Analyst Recruiting.
I briefly discussed this with MichaelA offline, but I’m interested in which “pipe” in the pipeline this sequence is primarily covering, but also which pipe it should be primarily covering.
A central example* of the EA-aligned research pipeline might look something like
get interested in EA-> be a junior EA researcher → be an intermediate EA researcher → be a senior EA researcher .
As a junior EA researcher, I’ve mostly been reading this sequence as addressing the first pipe in this pipeline.
However, I don’t have a principled reason to believe that this is the most critical component in the EA research pipeline, and I can easily think of strong arguments for later stages.
There’s a related question that’s pretty decision-relevant for me: I probably should have some principled take on what fraction of my “meta work-time” ought to be allocated to “advising/giving mentorship to others” vs “seeking mentorship and other ways to self-improve on research.”
*Though not the only possible pipeline, eg instead maybe we can recruit senior researchers directly.
Yeah, I agree that this is an important concrete question, and unfortunately I don’t have much in the way of useful general-purpose thoughts on it, except:
Mentorship/management is a really important bottleneck in EA research at the moment and seems likely to remain so, so testing or improving fit for that may be more important than one would think by default
But presumably one would sometimes improve as a mentor/manager more by just getting better at their own object-level work than by trying to work on mentorship/management specifically?
I don’t know how often that’s the case, but people should consider that hypothesis.
People should obviously consider the specifics of their situation, indications of what they’re a good fit for, etc.
(It seems possible to work out more specific and detailed advice than that. I’d be keen for someone to do that, or to find and share what’s already been worked out. I just haven’t done it myself.)
FWIW, I think this sequence is intended to be relevant to many more “pipelines” than just that one (if we make “pipeline” a unit of analysis of the size you suggest), such as:
Getting junior, intermediate, or senior researchers to be more EA-aligned and thereby do higher priority research and maybe do it better (since one’s worldview etc. could also influence many decisions smaller than what topic/question to focus on)
Getting junior, intermediate, or senior researchers to be more EA-aligned and thereby in various ways support more and better research on high priority topics (e.g., by providing mentorship)
Getting junior, intermediate, or senior researchers to do higher priority research without necessarily being more EA-aligned
E.g. through creating various forms of incentives or capturing the interest of not-very-aligned people
E.g., through making it easier for researchers who are already quite EA-aligned to do high priority research, e.g. by making research on those topics more academically acceptable and prestigious
Improving the pace, quality, dissemination, and/or use of EA-aligned research
E.g., helping people who would do EA-aligned research to do it using better tools, better mentorship, better resources, etc.
(This sequence doesn’t say much about dissemination or use, and I think that that’s a weakness of the sequence, but it’s in theory “in-scope”)
I think there’s basically a lot of pipelines that intersect and have feedback loops. I also think someone can “specialise” for learning about this whole web of issues and developing interventions for them, that many interventions could help with multiple pipes/steps/whatever, etc.
I think that this might sound frustratingly “holistic” and vague, rather than analytical and targeted. But I basically see this sequence as a fairly “bird’s-eye view” perspective that contains within it many specifics. And as I say in the third post:
Relatedly, I don’t think this sequence has a much stronger focus on one of those pipes/paths/intervention points than on others, with the exception that I unfortunately don’t say much here about dissemination and use of research.
Hey! I’ve done an audio recording of me reading this for the EA Forum podcast (I’m going to try and get the rest of this sequence in soon)