I briefly discussed this with MichaelA offline, but I’m interested in which “pipe” in the pipeline this sequence is primarily covering, and also which pipe it should primarily cover.
A central example* of the EA-aligned research pipeline might look something like
get interested in EA → be a junior EA researcher → be an intermediate EA researcher → be a senior EA researcher.
As a junior EA researcher, I’ve mostly been reading this sequence as focusing on the first pipe in this pipeline:
get interested in EA → be a junior EA researcher
However, I don’t have a principled reason to believe that this is the most critical component of the EA research pipeline, and I can easily think of strong arguments for focusing on later stages.
There’s a related question that’s pretty decision-relevant for me: I probably should have some principled take on what fraction of my “meta work-time” ought to be allocated to “advising/giving mentorship to others” vs. “seeking mentorship and other ways to self-improve on research.”
Yeah, I agree that this is an important concrete question, and unfortunately I don’t have much in the way of useful general-purpose thoughts on it, except:
Mentorship/management is a really important bottleneck in EA research at the moment and seems likely to remain so, so testing or improving fit for that may be more important than one would think by default.
But presumably one would sometimes improve as a mentor/manager more by just getting better at their own object-level work than by working on mentorship/management specifically?
I don’t know how often that’s the case, but people should consider that hypothesis.
People should obviously consider the specifics of their situation, indications of what they’re a good fit for, etc.
(It seems possible to work out more specific and detailed advice than that. I’d be keen for someone to do that, or to find and share what’s already been worked out. I just haven’t done it myself.)
FWIW, I think this sequence is intended to be relevant to many more “pipelines” than just that one (if we make “pipeline” a unit of analysis of the size you suggest), such as:
Getting junior, intermediate, or senior researchers to be more EA-aligned and thereby do higher priority research and maybe do it better (since one’s worldview etc. could also influence many decisions smaller than what topic/question to focus on)
Getting junior, intermediate, or senior researchers to be more EA-aligned and thereby in various ways support more and better research on high priority topics (e.g., by providing mentorship)
Getting junior, intermediate, or senior researchers to do higher priority research without necessarily being more EA-aligned
E.g., through creating various forms of incentives or capturing the interest of not-very-aligned people
E.g., through making it easier for researchers who are already quite EA-aligned to do high priority research, such as by making research on those topics more academically acceptable and prestigious
Improving the pace, quality, dissemination, and/or use of EA-aligned research
E.g., helping people who would do EA-aligned research to do it using better tools, better mentorship, better resources, etc.
(This sequence doesn’t say much about dissemination or use, and I think that that’s a weakness of the sequence, but it’s in theory “in-scope”)
I think there are basically a lot of pipelines that intersect and have feedback loops. I also think someone can “specialise” in learning about this whole web of issues and developing interventions for them, that many interventions could help with multiple pipes/steps/whatever, etc.
I think that this might sound frustratingly “holistic” and vague, rather than analytical and targeted. But I basically see this sequence as a fairly “bird’s-eye view” perspective that contains within it many specifics. And as I say in the third post:
When you’re currently designing, evaluating, and/or implementing an intervention for improving aspects of the EA research pipeline, you should of course also think for yourself about what goals are relevant to your specific situation.
And you should also probably consider doing things like conducting interviews or surveys with potential “users” or “experts”.
Relatedly, I don’t think this sequence has a much stronger focus on one of those pipes/paths/intervention points than on others, with the exception that I unfortunately don’t say much here about dissemination and use of research.
*Though this is not the only possible pipeline; e.g., maybe we can instead recruit senior researchers directly.