What percentage of people would you estimate were chosen from cold applications?
I think just under half of the people we took were people nobody on the selection committee knew at all before the application process. For about half of the rest, they’d had a single conversation with me (& I think usually not with anyone else on the committee).
[From memory; haven’t checked carefully.]
I think we had 3-5 conversations prior to the RSP interview, the first one in 2017. Though I think “single conversation” still gives basically the correct impression as all conversations we’ve had could conceivably fit into one very long conversation. (And we spoke very irregularly, didn’t know each other well, etc.)
I also had had a few very brief online conversations with another member of the selection committee, and I had applied to an organization run by them (with partially but not completely overlapping material).
(I was counting you in the other “half of the rest”, i.e. people I’d had more contact with than a single conversation, so probably wouldn’t be regarded as “chosen from cold applications”.)
Ah, makes sense. Sorry, I think I just misread “half of the rest” as something like “the other half”.
One data point that gets at something similar (i.e. to what extent did RSP recruit from people with an existing network in EA):
I was one of 9 people in the first cohort of RSP (start October 2018). Before starting:
0 of the other 8 people I knew even moderately well,
2 people I had met before in person at events once but didn’t know well (to the extent of only having exchanged <10 sentences one-on-one as opposed to in group settings during the multiple-day events we both attended),
2 additional people I had heard of e.g. from online discussions (but hadn’t directly interacted with them online),
4 people I had never heard of.
I was surprised by this, particularly the first point. (Positively, as I tend to think EA is too insular.)
I had been working at EAF/FRI (now CLR) since mid-2016, based in Berlin, and had attended several EAGs before. Overall I’d guess I was moderately well networked in EA but less so than people at key anglophone orgs such as CEA or Open Phil.
I’d also be interested in hearing how competitive places on the programme are.
Typically, how impressive were the backgrounds and ideas of those accepted onto the programme? And what’s the acceptance rate like? I heard that FHI’s shorter summer research programme was extremely competitive.
And if, during the programme, you’re struggling with the freedom, e.g. choosing a topic or a methodology, what are the support options available to you?
Impressiveness: good question, but it feels hard to answer without going into lots of detail, so I’m going to pass.
Acceptance rate: 9/~150, then 10/~250. We’re planning to take 8 in this round. The summer fellowship was 27/~300.
Some support options, briefly:
Talking with Owen, the programme director
Talking with me or other future project managers on the programme
Peer support
FHI provides opportunities for coaching and other external support
We have various structures that aim to help people with this, like 6-week project cycles, a major project in the second year, a quarterly review process, and an advisory board.
For the project cycles and major projects, scholars would by default have something like a project supervisor.
Apologies for the brief response, writing in haste!
It could be good if someone wrote an overview of the growing number of fellowships and scholarships in EA (and maybe also other forms of professional EA work). It could include the kind of info given above, and maybe draw inspiration from Larks’ overviews of the AI Alignment landscape. I don’t think I have seen anything quite like that, but please correct me if I’m wrong.
I also think that’d be good. To hopefully somewhat address this gap (though a written overview would still be useful), I’ve now created a tag for posts related to Research Training Programs, and tagged a few relevant posts I know of.
Note that those ratios are [number starting on programme]/[number of applications]. In fact a few people were made offers and declined, so I think on the natural way of understanding acceptance rate it’s a little higher.
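To illustrate the difference (with a made-up number, since the exact count of declined offers isn’t given): if, say, 2 of the first round’s offers had been declined, then
offer rate = (starters + declines) / applications = (9 + 2) / ~150 ≈ 7%
starting rate = starters / applications = 9 / ~150 = 6%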
Out of the rejection pool, are there any avoidable failure modes that come to mind—i.e. mistakes made by otherwise qualified applicants which caused rejection? For example, in a previous EA-org application I found out that I ought to have included more detail regarding potential roadblocks to my proposed research project. This seemed like a valuable point in retrospect, but somewhat unexpected given my experience with research proposals outside of EA.
EDIT: (Thanks to Rose for answering this question individually and agreeing to let me share her answer here.) Failure modes include: describing the value of proposed research ideas too narrowly instead of discussing their long-term value; and apparent over-confidence in the description of ideas, i.e. neglecting potential road-bumps and uncertainty.
One more data point: last year’s Summer Research Fellowship had an acceptance rate of 11/~90.