Improving EAs’ use of non-EA options for research training, credentials, testing fit, etc.
See the post introducing this sequence for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole. This post doesn’t necessarily represent the views of my employers.
In a previous post, I discussed observations that I think demonstrate that the current pipeline for “producing” EA-aligned research and researchers is at least somewhat insufficient, inefficient, and prone to error. In another post, I overviewed various possible interventions for improving that pipeline.
This post focuses on one potential intervention: Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.
Such non-EA options include courses, graduate degrees, internships, and jobs in (non-EA parts of) academia, think tanks, government, or industry
Pros of using these non-EA options include that:
They may provide better training
They may serve better for testing fit
They may provide better credentials that are more prestigious, widely recognised, credible, etc.
They use up fewer “EA resources” (e.g., the scarce time of senior EA researchers)
Cons of using these non-EA options include that:
They may provide worse or less relevant training
They may serve less well for testing fit
They may provide less relevant credentials
They involve a higher chance of value drift
EAs’ use of these non-EA options could be increased and/or improved via:
Raising awareness about these options and encouraging their usage
Guiding people towards the non-EA options that suit their needs
Financially supporting the use of these options
Creating and/or improving these non-EA options
Caveats and clarifications
My intention here is to quickly note some hopefully useful points, rather than to be comprehensive or groundbreaking
I have little first-hand knowledge about things like PhD programs or jobs at think tanks or in government
Though I’ve supplemented that with conversations and some light research
This is not necessarily the single most important intervention option for improving the EA-aligned research pipeline, and certainly wouldn’t make all the other intervention options unimportant.
But it does seem to be among the 5-10 most important interventions for this goal.
What are some non-EA options for research training, credentials, etc.?
Forms these options could take include:
Courses (e.g., undergraduate degrees, research-relevant summer schools)
Graduate degrees (e.g., PhD programs, Masters programs)
Research assistant roles
Places these options could be found include:
Governments, civil service, or politics
I mean this to contrast with “EA options” such as:
Applying for and doing EA-aligned research training programs or jobs at EA research orgs
Receiving mentorship from EA researchers
Doing independent research and publishing it on the EA Forum
Pros of EAs using these non-EA options
These non-EA options might provide better training (for the relevant EA’s needs) than EA options would, because:
Many of these non-EA options (and the people working within them) are much older and more experienced than EA as a movement, specific EA projects, or the people working within them, so they can draw on more experience, iteration, etc.
Many of these non-EA options are better resourced
There are far more non-EA than EA options, so even if the average quality was about the same, we would expect the non-EA positive outliers to be more numerous and more extreme (e.g., the very top professors in an area are rarely EAs)
I think it’s at least clear that some of these non-EA options provide better training for some purposes than some EA options. But it seems less clear to me whether non-EA or EA options are “better on average”. And it seems more productive to think about whether non-EA or EA options are better on average for specific types of people, career plans, etc., and ideally to break down “non-EA options” and “EA options” into more fine-grained categories when thinking about this. Similar caveats apply to the following points as well.
(See also this comment thread.)
These non-EA options will serve better for testing fit for some later roles/projects than EA options would.
These non-EA options might tend to provide credentials that are more prestigious, widely recognised, credible, etc. This is most relevant for later getting jobs, funding, etc. from non-EA sources.
Use up fewer EA resources
Using the non-EA options uses up fewer “EA resources”, especially the scarce time of (relatively) senior EA researchers. Other relevant resources include time spent on vetting by EA hirers or grantmakers, and time or money spent producing (or running) EA research training programs, educational materials, etc.
Cons of EAs using these non-EA options
These non-EA options might provide training that’s less good for the relevant EA’s needs than EA options would, because:
It can be harder to learn about or work on high-priority topics via these non-EA options.
E.g., it can be hard to find a PhD supervisor under whom you can work on and learn about some especially high-priority questions (for reasons including there being few supervisors who are experts on those topics and those topics being harder to publish papers on in top-tier journals).
It can be harder to learn about, use, or develop methodologies, ways of thinking, ways of communicating, etc. that are particularly useful in general or for the EA’s particular needs.
(But this will of course differ depending on what specific options are being compared and what the specific EA’s future plans are.)
These options will serve less well for testing fit for some later roles/projects.
This is partly for the reasons noted above. It’s also partly because some non-EA options require strong commitments and involve quite low room for exploration. In particular, for a PhD, one often has to choose a relatively narrow focus in advance and stick to that for several years. And having completed most of a PhD program seems to be a much less good credential for many purposes than having completed a PhD program, which reduces the value of trying a PhD for a year or two.
These non-EA options will sometimes provide credentials that are less relevant, credible, etc., for the relevant EA’s needs than the credentials an EA option would provide. For example, I believe at least some people involved in hiring for EA research roles would see high-quality blog-post-style explicitly EA research as a better proxy of an applicant’s fit for their roles than the completion of a PhD program (except where the PhD is especially relevant). Additionally, in some cases, the credentials from non-EA options would be less prestigious and widely recognised—for example, in the case of an obscure online course vs a DPhil done through the Future of Humanity Institute at Oxford.
Using non-EA options may tend to create a higher chance of value drift.
What are some ways EAs’ use of these options could be increased or improved?
Essentially, I see four main types of interventions for achieving this goal.
Raising awareness and providing encouragement
Meaning: Simply raise awareness of these options and the benefits of using them, and/or encourage their use.
Either for this whole category of options or for specific options
Either to “EAs in general” or to specific groups/individuals
Examples of this intervention type include:
Some 80,000 Hours career reviews
Guiding EAs towards the most suitable options
Meaning: Help guide people to either non-EA or EA options (depending on what’s appropriate in their individual situation or type of situation), or help guide them towards the non-EA options that are particularly high-quality and suited to their needs.
This intervention type could take forms such as:
Things like 80,000 Hours’ articles
This guidance could range from quite coarse-grained to quite fine-grained
E.g., it could range from “PhD programs in discipline X and jobs at think tank Y tend to be good for people who want to later do Z” to “This specific supervisor is great for learning from and is quite flexible about what people work on, as long as it’s broadly related to [field]”
Example intervention: List of useful PhD supervisors
I think someone should create a list of potential PhD supervisors who are either focused on high-priority topics or flexible enough that they’re happy to supervise work on such topics.
This seems important, tractable, and neglected
I expect a useful initial version could be created with only a few days’ work
This came to mind when someone highlighted to me that (1) a key barrier to EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc. is that it’s hard to find such PhD supervisors, but (2) such PhD supervisors do exist
How this list could be created:
Send a survey to EAs who have started/done PhDs or worked in academia
Ask them if they know people who might be this type of PhD supervisor, either from their own experience or from hearing things from other people
Make the list accessible to relevant people, and make it possible for them to suggest additions or make comments on people already listed
It’s probably best to not make this list fully publicly available
All other factors held equal, it’s of course best if these supervisors are also excellent researchers and excellent for learning from
But the list should probably just note information relevant to that, rather than omitting supervisors who don’t seem excellent in these ways
If you’re interested in helping make that happen, please let me know, and I could put you in touch with another person who independently had a similar idea and might implement it at some point.
It could also be good to ask people what processes or proxies they used to find the relevant kind of PhD supervisor, and to write up this guidance somewhere or use it to expand the list.
Financially supporting the use of these options
Obviously this could include providing scholarships, grants, etc. to people doing graduate degrees (as is already often done by, for example, Open Philanthropy or the EA Long-Term Future Fund). Another approach is discussed below.
Example intervention: Funding EAs to work at think tanks
One could fund EAs to work at prestigious think tanks alongside or under excellent researchers, perhaps on topics that the EA and/or the funder are especially keen for the EA to work on.
Advantages of this approach:
Think tanks tend to have more flexibility than academia in what they write about, as their reports don’t have to pass peer-review, fit into established journals, etc.
Think tanks’ incentives are more closely tied to funding than incentives in academia are. And apparently some (or many?) think tanks are able and willing to essentially just accept funding for a specific person to work on a specific topic (with the funder deciding on the person and the topic).
Even if those topics aren’t what the think tanks typically work on
I assume the topics have to be broadly aligned with the think tank’s focuses and that the person to be hired has to seem high-calibre, though I’m not sure about either point
(Of course, people can seek jobs there without bringing their own funding!)
(I think it would also be useful to work out which think tanks and collaborating/supervising researchers would be best for this, which would be an example of “Guiding EAs towards the most suitable options”, similar to creating a list of useful PhD supervisors, as discussed above.)
Creating and/or improving these non-EA options
Meaning: Work to build fields, shift incentives, shift norms, etc. such that more relevant non-EA options come into existence and/or become more useful for EAs seeking research-relevant training, credentials, testing fit, etc.
See my previous post’s section on “Increasing and/or improving research by non-EAs on high-priority topics” for further thoughts relevant to this.
If you have thoughts on these ideas or would be interested in implementing (with funding) projects to help with this sort of thing, please comment below, send me a message, or fill in this anonymous form. This could perhaps inform my future efforts; allow me to provide advice or connections; etc.
For this post in particular, I should especially thank Nora Ammann, Edo Arad, Alexis Carlier, Peter Hurford, and an Anonymous Intellectual Benefactor.
I’m using the term “EAs” as shorthand for “People who identify or interact a lot with the EA community”; this would include some people who don’t self-identify as “an EA”.
Here I use “credentials” as shorthand for something like “credible signals of fit”, which can include not just completed degrees and work experience but also published outputs, strong letters of recommendation, etc.
Perhaps also “bootcamps” that are analogous to coding bootcamps but that are more relevant to research. But I don’t know if such things exist.
I think some of this thinking has been done and written up, for example in some 80,000 Hours career reviews, but I expect there’s room for more valuable work here.
Or just have conversations with them, but that seems less good.
This idea, and several of the specific points I make, are based on a conversation with someone who’s been thinking about this as an intervention for improving the EA-aligned research pipeline.