Improving EAs’ use of non-EA options for research training, credentials, testing fit, etc.
See the post introducing this sequence for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole.[1] This post doesn’t necessarily represent the views of my employers.
Summary
In a previous post, I discussed observations that I think demonstrate that the current pipeline for “producing” EA-aligned research and researchers is at least somewhat insufficient, inefficient, and prone to error. In another post, I gave an overview of various possible interventions for improving that pipeline.
This post focuses on one potential intervention: Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.[2]
Such non-EA options include courses, graduate degrees, internships, and jobs in (non-EA parts of) academia, think tanks, government, or industry
Pros of using these non-EA options include that:
They may provide better training
They may serve better for testing fit
They may provide better credentials that are more prestigious, widely recognised, credible, etc.[3]
They use up fewer “EA resources” (e.g., the scarce time of senior EA researchers)
Cons of using these non-EA options include that:
They may provide worse or less relevant training
They may serve less well for testing fit
They may provide less relevant credentials
They involve a higher chance of value drift
EAs’ use of these non-EA options could be increased and/or improved via:
Raising awareness about these options and encouraging their usage
Guiding people towards the non-EA options that suit their needs
Financially supporting the use of these options
Creating and/or improving these non-EA options
Caveats and clarifications
The caveats mentioned here and here apply to this post as well
My intention here is to quickly note some hopefully useful points, rather than to be comprehensive or groundbreaking
Other relevant writings include several 80,000 Hours career reviews (e.g., on doing an Economics PhD) and some of the posts tagged Working at EA vs Non-EA Orgs
I have little first-hand knowledge about things like PhD programs or jobs at think tanks or in government
Though I’ve supplemented that with conversations and some light research
This is not necessarily the single most important intervention option for improving the EA-aligned research pipeline, and certainly wouldn’t make all the other intervention options unimportant.
But it does seem to be among the 5-10 most important interventions for this goal.
What are some non-EA options for research training, credentials, etc.?
Forms these options could take include:
Courses (e.g., undergraduate degrees, research-relevant summer schools)[4]
Graduate degrees (e.g., PhD programs, Masters programs)
Volunteering
Internships
Research assistant roles
Jobs
Places these options could be found include:
Academia
Think tanks
Nonprofits
Governments, civil service, or politics
Industry
I mean this to contrast with “EA options” such as:
Applying for and doing EA-aligned research training programs or jobs at EA research orgs
Receiving mentorship from EA researchers
Doing independent research and publishing it on the EA Forum
Pros of EAs using these non-EA options
Training
These non-EA options might provide better training (for the relevant EA’s needs) than EA options would, because:
Many of these non-EA options (and the people working within them) are much older and more experienced than EA as a movement, specific EA projects, or the people working within them, so they can draw on more experience, iteration, etc.
Many of these non-EA options are better resourced
There are far more non-EA than EA options, so even if the average quality was about the same, we would expect the non-EA positive outliers to be more numerous and more extreme (e.g., the very top professors in an area are rarely EAs)
I think it’s at least clear that some of these non-EA options provide better training for some purposes than some EA options. But it seems less clear to me whether non-EA or EA options are “better on average”. And it seems more productive to think about whether non-EA or EA options are better on average for specific types of people, career plans, etc., and ideally to break “non-EA options” and “EA options” down into more fine-grained categories when doing so.[5] Similar caveats apply to the following points as well.
(See also this comment thread.)
Testing fit
These non-EA options will serve better for testing fit for some later roles/projects than EA options would.
Credentials
These non-EA options might tend to provide credentials that are more prestigious, widely recognised, credible, etc. This is most relevant for later getting jobs, funding, etc. from non-EA sources.
Use up fewer EA resources
Using the non-EA options uses up fewer “EA resources”, especially the scarce time of (relatively) senior EA researchers. Other relevant resources include time spent on vetting by EA hirers or grantmakers, and time or money spent producing (or running) EA research training programs, educational materials, etc.
Cons of EAs using these non-EA options
Training
These non-EA options might provide training that’s less good for the relevant EA’s needs than EA options would, because:
It can be harder to learn about or work on high-priority topics via these non-EA options.
E.g., it can be hard to find a PhD supervisor under whom you can work on and learn about some especially high-priority questions (for reasons including there being few supervisors who are experts on those topics and those topics being harder to publish papers on in top-tier journals).
It can be harder to learn about, use, or develop methodologies, ways of thinking, ways of communicating, etc. that are particularly useful in general or for the EA’s particular needs.
For example, various rationality-related ideas, thinking in probabilities, and reasoning transparency.
(But this will of course differ depending on what specific options are being compared and what the specific EA’s future plans are.)
Testing fit
These options will serve less well for testing fit for some later roles/projects.
This is partly for the reasons noted above. It’s also partly because some non-EA options require strong commitments and leave quite little room for exploration. In particular, for a PhD, one often has to choose a relatively narrow focus in advance and stick to that for several years. And having completed most of a PhD program seems to be a much weaker credential for many purposes than having completed a PhD program, which reduces the value of trying a PhD for a year or two.
Credentials
These non-EA options will sometimes provide credentials that are less relevant, credible, etc., for the relevant EA’s needs than the credentials an EA option would provide. For example, I believe at least some people involved in hiring for EA research roles would see high-quality blog-post-style explicitly EA research as a better proxy for an applicant’s fit for their roles than the completion of a PhD program (except where the PhD is especially relevant). Additionally, in some cases, the credentials from non-EA options would be less prestigious and widely recognised—for example, in the case of an obscure online course vs a DPhil done through the Future of Humanity Institute at Oxford.
Value drift
Using non-EA options may tend to create a higher chance of value drift.
What are some ways EAs’ use of these options could be increased or improved?
Essentially, I see four main types of interventions for achieving this goal.
Raising awareness and providing encouragement
Meaning: Simply raise awareness of these options and the benefits of using them, and/or encourage their use.
Either for this whole category of options or for specific options
Either to “EAs in general” or to specific groups/individuals
Examples of this intervention type include:
This post
The post SHOW: A framework for shaping your talent for direct work
Some 80,000 Hours career reviews
Guiding EAs towards the most suitable options
Meaning: Help guide people to either non-EA or EA options (depending on what’s appropriate in their individual situation or type of situation), or help guide them towards the non-EA options that are particularly high-quality and suited to their needs.
This intervention type could take forms such as:
Recommendation lists
Things like 80,000 Hours’ articles
1-1 advice
This guidance could range from quite coarse-grained to quite fine-grained
E.g., it could range from “PhD programs in discipline X and jobs at think tank Y tend to be good for people who want to later do Z” to “This specific supervisor is great for learning from and is quite flexible about what people work on, as long as it’s broadly related to [field]”
Example intervention: List of useful PhD supervisors
I think someone should create a list of potential PhD supervisors who are either focused on high-priority topics or flexible enough that they’re happy to supervise work on such topics.
This seems important, tractable, and neglected
I expect a useful initial version could be created with only a few days’ work
This came to mind when someone highlighted to me that (1) a key barrier to EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc. is that it’s hard to find such PhD supervisors, but (2) such PhD supervisors do exist
How this list could be created:
Send a survey to EAs who have started/done PhDs or worked in academia[6]
Ask them if they know people who might be this type of PhD supervisor, either from their own experience or from hearing things from other people
Make the list accessible to relevant people, and make it possible for them to suggest additions or make comments on people already listed
Other considerations:
It’s probably best to not make this list fully publicly available
All other factors held equal, it’s of course best if these supervisors are also excellent researchers and excellent for learning from
But the list should probably just make a note of info relevant to that, rather than not mentioning supervisors who don’t seem excellent in these ways
If you’re interested in helping make that happen, please let me know, and I could put you in touch with another person who independently had a similar idea and might implement it at some point.
It could also be good to ask people what processes or proxies they used to find the relevant kind of PhD supervisor, and then to write up this guidance somewhere or use it to expand the list.
Financially supporting the use of these options
Obviously this could include providing scholarships, grants, etc. to people doing graduate degrees (as is already often done by, for example, Open Philanthropy or the EA Long-Term Future Fund). Another approach is discussed below.
Example intervention: Funding EAs to work at think tanks[7]
One could fund EAs to work at prestigious think tanks alongside or under excellent researchers, perhaps on topics that the EA and/or the funder are especially keen for the EA to work on.
Advantages of this approach:
Think tanks tend to have more flexibility than academia in what they write about, as their reports don’t have to pass peer-review, fit into established journals, etc.
Think tanks’ incentives are more closely tied to funding than incentives in academia are. And apparently some (or many?) think tanks are able and willing to essentially just accept funding for a specific person to work on a specific topic (with the funder deciding on the person and the topic).
Even if those topics aren’t what the think tanks typically work on
I assume the topics have to be broadly aligned with the think tank’s focuses and that the person to be hired has to seem high-calibre, though I’m not sure about either point
(Of course, people can also seek jobs there without bringing their own funding!)
(I think it would also be useful to work out which think tanks and collaborating/supervising researchers would be best for this, which would be an example of “Guiding EAs towards the most suitable options”, similar to creating a list of useful PhD supervisors, as discussed above.)
Creating and/or improving these non-EA options
Meaning: Work to build fields, shift incentives, shift norms, etc. such that more relevant non-EA options come into existence and/or become more useful for EAs seeking research-relevant training, credentials, testing fit, etc.
See my previous post’s section on “Increasing and/or improving research by non-EAs on high-priority topics” for further thoughts relevant to this.
---
If you have thoughts on these ideas or would be interested in implementing (with funding) projects to help with this sort of thing, please comment below, send me a message, or fill in this anonymous form. This could perhaps inform my future efforts; allow me to provide advice or connections; etc.
[1] For this post in particular, I should especially thank Nora Ammann, Edo Arad, Alexis Carlier, Peter Hurford, and an Anonymous Intellectual Benefactor.
[2] I’m using the term “EAs” as shorthand for “people who identify or interact a lot with the EA community”; this would include some people who don’t self-identify as “an EA”.
[3] Here I use “credentials” as shorthand for something like “credible signals of fit”, which can include not just completed degrees and work experience but also published outputs, strong letters of recommendation, etc.
[4] Perhaps also “bootcamps” that are analogous to coding bootcamps but that are more relevant to research. But I don’t know if such things exist.
[5] I think some of this thinking has been done and written up, for example in some 80,000 Hours career reviews, but I expect there’s room for more valuable work here.
[6] Or just have conversations with them, but that seems less good.
[7] This idea, and several of the specific points I make, are based on a conversation with someone who’s been thinking about this as an intervention for improving the EA-aligned research pipeline.
For more on “Example intervention: Funding EAs to work at think tanks”, see here. That post and those notes are specific to the US system; I’m not sure it would work (or at least work the same way) in other systems. Think tanks are also a much bigger part of the policy research ecosystem in the US than in other countries. I’m a big fan of this model, but I’m not sure anyone has checked whether it could work outside of the US context.
A couple of other caveats:
I don’t think this is true. Think tank researchers indeed face fewer journal/peer review constraints, but they have some additional ones, especially perceptions of policy relevance. There are academic journals/conferences for most topics, but you’re going to have a hard time finding a think tank interested in speculative longtermist research. My guess is a large majority (probably >75%) of EA researchers (even those who would self-identify as being interested in “policy”) would have a rather hard time with think tank constraints.
From a think tank perspective, there is a big difference between flexible individual-level funding and individual-level funding to work on a specific topic from a specific perspective. Most think tanks are very sensitive about the optics of being “bought” by outside interests. They’re fine with outside funding and eager for free labor, but I think many (especially reputable/high-quality) think tanks would not want to accept someone who comes in saying “I come to you from X funder and they want me to write Y and Z.” The easiest way to get around this issue is joining a think tank that has overlapping interests (e.g. if you want to work on nuclear nonproliferation, you can join the Nuclear Threat Initiative or the Arms Control Association teams already working on that issue).
Nice, thanks for that info! I’ll check out that post soon, and might reach out to you with questions at some point.