Impressiveness: good question, but feels hard to express without going into lots of detail, so I’m going to pass.
Acceptance rate: 9/~150, then 10/~250. We’re planning to take 8 in this round. The summer fellowship was 27/~300.
Some support options, briefly:
- Talking with Owen, the programme director
- Talking with me or other future project managers on the programme
- Peer support
- FHI provides opportunities for coaching and other external support

We have various structures that aim to help people with this, like 6-week project cycles, a major project in the second year, a quarterly review process, and an advisory board. For the project cycles and major projects, scholars would by default have something like a project supervisor.
Apologies for the brief response, writing in haste!
It could be good if someone wrote an overview of the growing number of fellowships and scholarships in EA (and maybe also other forms of professional EA work). It could include the kind of info given above, and maybe draw inspiration from Larks’ overviews of the AI Alignment landscape. I don’t think I have seen anything quite like that, but please correct me if I’m wrong.
I also think that’d be good. To hopefully somewhat address this gap (though a written overview would still be useful), I’ve now created a tag for posts related to Research Training Programs, and tagged a few relevant posts I know of.
Note that those ratios are [number starting on programme]/[number of applications]. In fact a few people were made offers and declined, so I think on the natural way of understanding acceptance rate it’s a little higher.
Out of the rejection pool, are there any avoidable failure modes that come to mind—i.e. mistakes made by otherwise qualified applicants which caused rejection? For example, in a previous EA-org application I found out that I ought to have included more detail regarding potential roadblocks to my proposed research project. This seemed like a valuable point in retrospect, but somewhat unexpected given my experience with research proposals outside of EA.
EDIT: (Thanks to Rose for answering this question individually and agreeing to let me share her answer here.) Failure modes include: describing the value of proposed research ideas too narrowly instead of discussing long-term value, and apparent over-confidence in the description of ideas, i.e. neglecting potential road-bumps and uncertainty.
One more data point: last year’s Summer Research Fellowship had an acceptance rate of 11/~90.