You mentioned in your 2021 update that you’re starting a research internship program next year (contingent on more funding) in order to identify and train talented researchers, and therefore contribute to EA-aligned research efforts (including your own).
Besides offering similar internships, what do you think other EA orgs could do to contribute to these goals? What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?
Hi Arushi,
I am very hopeful the internship program will let us identify, take on, and train many more staff than we could otherwise, and then either hire them directly or recommend them to other organizations.
While I am wary of recommending unpaid labor (that’s why our internship is paid), I otherwise think one of the best ways for a would-be researcher to distinguish themselves is writing a thoughtful and engaging EA Forum post. I’ve seen a lot of great hires distinguish themselves like this.
Other than opening more researcher jobs and internships, I think other EA orgs could perhaps contribute by writing advice and guides about research processes, or by offering more “behind the scenes” content on how different research is done.
Lastly, in my personal opinion, I think we should also do more to create an EA culture where people don’t feel like the only way they can contribute is as a researcher. I think the role gets a lot more glamor than it deserves, and many people can contribute a lot through earning to give, working in academia, working in politics, working in a non-EA think tank, etc.
I’m happy to see an increase in the number of temporary visiting researcher positions at various EA orgs. I found my time visiting GPI during their Early Career Conference Programme very valuable (hint: applications for 2021 are now open, apply!) and would encourage other orgs to run similar sorts of programmes to this and FHI’s (summer) research scholars programme. I’m very excited to see how our internship program develops as I really enjoy mentoring.
I think I was competitive for the RP job because of my T-shaped skills: broad knowledge of lots of EA-related things combined with specialised knowledge in a specific useful area (economics, in my case). Michael Aird probably has the most to say about developing broad knowledge given how much EA content he has consumed in the last couple of years, but in general, reading things on the Forum and actively discussing them with other people (perhaps in a reading group) seems like the way to develop in this area. Developing specialised skills obviously depends a lot on the skill, but graduate education and relevant internships are the most obvious routes here.
I already strongly agreed with your first paragraph in a separate answer, so I’ll just jump in here to strongly agree with the second one too!
“Michael Aird probably has the most to say about developing broad knowledge given how much EA content he has consumed in the last couple of years”

I can confirm that I’ve been gobbling up EA content rather obsessively for the last 2 years. If anyone’s interested in what this involved and how many hours I spent on it, I describe that here.
There are some relevant answers in here and here.
“What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?”

I think this is a relatively minor thing, but trying to become close to perfectly calibrated (i.e., being able to put accurate numbers on your uncertainty) in some domains seems like a moderate-sized win, at very low cost.
I mainly believe this because the costs are relatively low. My best guess is that the majority of EAs can become close to perfectly calibrated on numerical trivia questions in much less than 10 hours of deliberate practice, and my median guess for the amount of time needed is around 2 hours (e.g., practice here).
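(For concreteness, here’s a minimal sketch of what such practice amounts to, assuming you drill with 90% confidence intervals on numerical trivia. The questions, bounds, and scoring function are invented for illustration; the practice tools linked above handle this for you.)

```python
# Minimal sketch of scoring an interval-calibration drill.
# For each trivia question you state a 90% confidence interval;
# a well-calibrated forecaster's intervals contain the truth ~90% of the time.

def interval_hit_rate(guesses):
    """guesses: list of (low, high, truth) tuples for stated 90% intervals."""
    hits = sum(1 for low, high, truth in guesses if low <= truth <= high)
    return hits / len(guesses)

# Illustrative drill: (your low guess, your high guess, true answer)
drills = [
    (5_000, 7_500, 6_650),        # length of the Nile in km: hit
    (1_900, 1_925, 1_912),        # year the Titanic sank: hit
    (200_000, 300_000, 384_400),  # average Earth-Moon distance in km: miss
]
print(f"Hit rate: {interval_hit_rate(drills):.0%}")  # 67% vs. a 90% target -> overconfident
```

The feedback loop is just: guess, check, notice whether your hit rate is below your stated confidence (overconfidence) or above it (underconfidence), and widen or narrow your intervals accordingly.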
I want to be careful with my claims here. I think sometimes people have the impression that getting calibrated is synonymous with rationality, or intelligence, or judgement. I think this is wrong:
Concretely, I just don’t think being perfectly calibrated is that big a deal. My guess is that going from median-EA levels of general calibration to perfect calibration on trivia questions improves the quality of one’s research/thinking by 0.2%-1%. I’d be surprised if somebody became a 5% better researcher via these exercises, and very surprised if they improved by 30%.
In forecasting/modeling, the main quantifiable metrics include both a) calibration (roughly speaking, being able to quantify your uncertainty) and b) discrimination (roughly speaking, how often you’re right). In the vast majority of cases, calibration is just much less important than discrimination. (There’s a sketch of this distinction at the end of this comment.)
There are issues with generalizing from good calibration on trivia questions to good calibration overall. The latter is likely to be much harder to train, or even to quantify precisely (though I’m reasonably confident that going from poor to perfect calibration on trivia should generalize somewhat; Dave Bernard might have clearer thoughts on this).
I think calibration matters more for generalist/secondary research (much of what RP does) than for things that either a) require relatively narrow domain expertise, like ML-heavy AI Safety research or biology-heavy biosecurity work, or b) require unusually novel thinking/insight (like much of crucial considerations work).
Nonetheless, I’m a strong advocate for calibration practice because I think the first hour or two of practice will pay off by 1-2 orders of magnitude over your lifetime, and it’s hard to identify easy wins like that (I suspect even exercise has a less favorable cost-benefit ratio, though of course it’s much easier to scale).
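(To make the calibration/discrimination distinction above concrete, here is a minimal sketch using the standard Murphy decomposition of the Brier score; the forecast track record below is invented, and binning by stated probability is one simple choice among several.)

```python
# Sketch: Murphy decomposition of the Brier score into reliability
# (calibration: lower is better) and resolution (discrimination: higher
# is better), where Brier = reliability - resolution + uncertainty.
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """forecasts: stated probabilities in [0, 1]; outcomes: 0/1 resolutions.
    Groups together forecasts with the same stated probability."""
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)
    reliability = sum(len(outs) * (f - sum(outs) / len(outs)) ** 2
                      for f, outs in bins.items()) / n
    resolution = sum(len(outs) * (sum(outs) / len(outs) - base_rate) ** 2
                     for outs in bins.values()) / n
    uncertainty = base_rate * (1 - base_rate)
    return reliability, resolution, uncertainty

# Invented track record: five "90%" forecasts and five "60%" forecasts.
forecasts = [0.9] * 5 + [0.6] * 5
outcomes  = [1, 1, 1, 1, 0] + [1, 1, 0, 0, 0]
rel, res, unc = brier_decomposition(forecasts, outcomes)
print(f"reliability={rel:.3f}, resolution={res:.3f}, brier={rel - res + unc:.3f}")
```

A forecaster can drive reliability to roughly zero just by being well calibrated, but improving resolution requires actually knowing more about the questions, which is the sense in which discrimination is the harder and more valuable part.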
Misc thoughts on “What do you think individuals could do to become skilled in this kind of research and become competitive for these jobs?”
There was some relevant discussion here. Ideas mentioned there include:
getting mentorship outside of EA orgs (either before switching into EA orgs after a few years, or as part of a career that remains outside of explicitly EA orgs longer-term)
working as a research assistant for a senior researcher
I think the post SHOW: A framework for shaping your talent for direct work is also relevant.
Hi Arushi,
Good questions! I’ll split some thoughts into a few separate comments for readability.
Writing on the Forum
I second Peter’s statement that

“one of the best ways for a would-be researcher to distinguish themselves is writing a thoughtful and engaging EA Forum post. I’ve seen a lot of great hires distinguish themselves like this.”
(Though in some cases it might make sense to publish the post to LessWrong instead or in addition.)
This statement definitely seems true in my own case (though I imagine for some people other approaches would be more effective):
I got an offer for an EA research job before I began writing for the EA Forum. But I was very much lacking the background/credentials the org said they were looking for, so I’m almost certain I wouldn’t have gotten that offer if the application process hadn’t included a work test that let me show I was a good fit despite that lack. (I was also lucky that the org let me do the work test rather than screening me out before that.) And the work test was basically “Write an EA Forum post on [specific topic]”; what I wrote for it did indeed end up as one of my first EA Forum/LessWrong posts.
And then this year I’ve gotten offers from ~35% of the roles I’ve applied to, compared to ~7% last year, and I’d guess that the biggest factors in the difference were:
1. I now had an EA research role on my CV, signalling I might be a fit for other such roles.
2. Going from 1 FTE of non-EA work (teaching) in 2019 to only ~0.3 FTE of non-EA work (a grantwriting role I did for a climate change company on the side of my ~0.7 FTE EA work till around August) allowed me a lot of time to build relevant skills and knowledge.
3. In 2020 I wrote a bunch of (mostly decently/well received) EA Forum or LessWrong posts, helping to signal my skills and knowledge, and also just “get my name out there”. (“Getting my name out there” was not part of my original goal, but did end up happening, and to quite a surprising degree.)
4. Writing EA Forum and LessWrong posts helped force and motivate me to build relevant skills and knowledge.
5. Comments and feedback from others on my EA Forum and LessWrong posts sometimes helped me build relevant skills and knowledge, or build my ideas of what was worth thinking and writing about. (See also this other comment of mine from this AMA.)
Factors 1 and 2 didn’t depend on me writing things on the EA Forum or LessWrong. But factors 3-5 did. So it seems that writing for the Forum and LessWrong really helped me out here. It also seems plausible that, if I’d started writing for the Forum/LW before I got my first EA job offer, that might’ve led to me getting an offer sooner than I in fact did.
(But I’m not sure how generalisable any of these takeaways are—maybe this approach suited me especially well for some reason.)
On this, I’d also recommend Aaron Gertler’s talks Why you (yes, you) should post on the EA Forum and How you can make an impact on the EA Forum.
My own story & a disclaimer
(This is more of a tangent than an answer, but might help provide some context for my other responses here and elsewhere in this AMA. Feel free to ignore it, though!)
I learned about EA in late 2018, and didn’t have much relevant expertise, experience, or credentials. I’d done a research-focused Honours year and published a paper, but that was in an area of psychology that’s not especially relevant to the sort of work that, after learning about EA, I figured I should aim towards. (More on my psych background here.) I was also in the midst of the 2-year Teach For Australia program, which involves teaching at a high school and also wasn’t relevant to my new EA-aligned plans.
Starting then and continuing through to mid-2020, I made an active effort to “get up to speed” on EA ideas, as described here.
In 2019, I applied for ~30 EA-aligned roles, mostly research-ish roles at EA orgs (though also some non-research roles or roles at non-EA orgs). I ultimately got two offers, one for an operations role at an EA org and one for a research role. I think I had relevant skills but didn’t have clear signals of this (e.g., more relevant work experience or academic credentials), so I was often rejected at the CV screening stage but often did ok if I was allowed through to work tests and interviews. And both of the offers I got were preceded by work tests.
Then in 2020, I wrote a lot of posts on the EA Forum and a decent number on LessWrong, partly for my research job and partly “independently”. I also applied for ~11 roles this year (mostly research roles, and I think all at EA orgs), and ultimately received 4 offers (all research roles at EA orgs). So that success rate was much higher, which seems to fit my theory that last year I had relevant skills but lacked clear signals of this.
So I’ve now got a total of ~1.5 years FTE of research experience, ~0.5 of which (in 2017) was academic psychology research and ~1 of which (this year) was split across 3 EA orgs. That’s obviously not enough time to be an expert, and I still have a great deal to learn on a whole host of dimensions.
Also, I only started with Rethink roughly a month ago.
Hey Michael,
This is a tangent to your tangent, but are you still based in Australia? If so, how do you find Rethink’s remote-by-default setup with the time difference?
For context, I considered applying for the same role, but ultimately didn’t because at the time I was stuck working from Australia with all my colleagues in the GMT+0 timezone (thanks, covid), and the combination of daytime isolation and late-night meetings was making me pretty miserable. Is Rethink better at managing these issues?
Cheers!
Just want to say that Rethink Priorities is committed to being able to successfully integrate remote Australians and we’d be excited to have more APAC applicants in our future hiring rounds!
Hey Harriet,
Good question. And sorry to hear you had that miserable situation—hope things are better for you now!
First, I should note that I’m in Western Australia, so things would presumably be somewhat different for people in the Eastern states. Also, of course, different people’s needs, work styles, etc. differ.
I’ve been meeting with US people in my mornings, which is working well because I wake up around 7am and start working around 8, while the people I’m meeting with are more night-owl-ish. And I’ve been meeting with people in the UK/Europe in my evenings (around 5-9pm), which I’m also fine with.
Though it is tricky to get all 3 time zones into the same meeting. Usually one of us has to be up early or late. But so far those sorts of group meetings have just been something like once a fortnight, so it’s been tolerable.
And other than meetings, time zones don’t really seem to matter for my job; most of my work and most of my communication with colleagues (via Slack, Google Doc comments, email, etc.) doesn’t require being up at the same time as someone else. (I imagine that, in general, this is true for many research roles and less true for e.g. operations roles.)
Though again, I’ve only been at Rethink for a month so far. And I’m planning to move to Oxford in March. If I were in Australia permanently, perhaps time zone issues for team meetings would become more annoying.
Btw, I also worked from Australia for Convergence Analysis (based in the UK/Europe) from March to ~August. That was even easier, because there were never three quite different time zones to deal with (no US employees).
Thanks for the detailed answer—this actually sounds pretty doable!
Research training programs, and similar things
(You said “Besides offering similar internships”. But I’m pretty excited about other orgs running similar internships, and/or running programs that are vaguely similar and address basically the same issues but aren’t “internships”. So I’ll say a bit about that cluster of stuff, with apologies for sort-of ignoring instructions!)
David wrote:

“I’m happy to see an increase in the number of temporary visiting researcher positions at various EA orgs. I found my time visiting GPI during their Early Career Conference Programme very valuable (hint: applications for 2021 are now open, apply!) and would encourage other orgs to run similar sorts of programmes to this and FHI’s (summer) research scholars programme. I’m very excited to see how our internship program develops as I really enjoy mentoring.”
I second all of that, except swapping GPI’s Early Career Conference Programme (which I haven’t taken part in) for the Center on Long-Term Risk’s Summer Research Fellowship. I did that fellowship with CLR from mid-August to mid-November, and found it very enjoyable and useful.
I recently made a tag for posts relevant to what I called “research training programs”. By this I mean things like FHI and CLR’s Summer Research Fellowships, Rethink Priorities’ planned internship program, CEA’s former Summer Research Fellowship, probably GPI’s Early Career Conference Programme, probably FHI’s Research Scholars Program, maybe the Open Phil AI Fellowship, and maybe ALLFED’s volunteer program. Readers interested in such programs might want to have a look at the posts with that tag.
I think that these programs might be one of the best ways to address some of the main bottlenecks in EA, or at least in longtermism (I’ve thought less about areas of EA other than longtermism). What I mean is related to the claim that EA is vetting-constrained, and to Ben Todd’s claim that some of EA’s main bottlenecks at the moment are “organizational capacity, infrastructure, and management to help train people up”. There was also some related discussion here (though it’s harder to say whether that discussion overall supported the claims I’m gesturing at).
So I’m really glad a few more such programs have recently popped up in longtermism. And I’m really excited about Rethink’s internship program (which I wasn’t involved in the planning of, and didn’t know about when I accepted the role at Rethink). And I’d be keen to see more such programs emerge over time. I think they could take a wide variety of forms, including but not limited to internships.
And I’d strongly recommend aspiring or early-career researchers consider applying to such programs. See also Jsevillamol’s post My experience on a summer research programme.
(As always, these are just my personal views, not necessarily the views of other people at Rethink.)