Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement
Written by Jessica McCurdy and Thomas Woodside
Yale has been one of the only groups advocating for a selective fellowship. However, after we noticed a couple of instances of people who had barely been accepted to the fellowship becoming extremely engaged with the group, we decided to analyze the relationship between our application scores and eventual engagement. We found no correlation.
We think this shows that some of the people we rejected in the past could have become extremely engaged members, which seems like a lot of missed value. We are still doing more analysis using different metrics and methods. For now, we tentatively recommend that groups not follow our previous advice about being selective if they have the capacity to take on more fellows. For groups limited by capacity, we recommend either guaranteeing future acceptance to applicants over a baseline or encouraging them to apply to EA Virtual Programs. This is not to say that there is no good way of selecting fellows, but rather that ours in particular was not effective.
Rationale for Being Selective & Relevant Updates
Below are our past reasons for being selective, along with our updated thoughts:
Only the most excited applicants participate (less engaged fellows who have poor attendance or involvement can set unwanted norms)
By emphasizing the time commitment in interviews and making it easy for applicants to postpone the fellowship, we hope applicants will self-select for this.
Fellows are incentivized to show up and be actively engaged (since they know they are occupying a spot another person did not receive)
The application and interview process alone should create the feeling of selectiveness even if we don’t end up being that selective.
We only need a few moderators that we are confident will be friendly, welcoming, and knowledgeable about EA
We were lucky enough to have several previous fellows who fit this description.
Now that there is training available for facilitators we hope to skill up new ones quickly.
We made it a lot easier to become and remain a facilitator by separating that role from the organizer role.
We create a stronger sense of community amongst Fellows
This is still a concern
Each Fellow can receive an appropriate amount of attention since organizers get to know each one individually
This is still a concern, though in the past the Fellowship organizers also took on many other roles, and we now have one person whose only role is to manage the fellowship.
We don’t strain our organizing capacity and can run the Fellowship more smoothly
This is still a concern, but the previous point also applies here.
Overall, we still think these are good and important reasons for keeping the fellowship smaller. However, we are currently thinking that the possibility of rejecting an applicant who would have become really involved outweighs these concerns.*
*Although, there is an argument to be made that these people would have found a way to be involved anyways.
How we Measured Engagement and Why we Chose it
How we measured it
We brainstormed ways of being engaged with the group and estimated a general ranking for them. We ended up with:
Became YEA President >
Joined YEA Exec >
Joined the Board OR
Became a regular attendee of events and discussion groups OR
Became a mentor (after graduating) >
Became a Fellowship Facilitator (who is not also on the board) OR
Did the In-Depth Fellowship >
Became a top recruiter for the Fellowship OR
Had multiple 1-1s outside of the fellowship OR
Asked to be connected to the EA community in their post-graduation location OR
Attended the Student Summit >
Came to at least three post-fellowship events/1-1s >
Came to at least one post-fellowship event/1-1 >
Had good Fellowship attendance >
Did not drop out of the Fellowship (unless for particularly good reason) >
Dropped out of the fellowship never to be seen again
We ranked each set of fellows separately, so we were ranking at most 17 people at a time. If people had the same level of engagement, we gave them the same rank.
One potential issue is that this selects for people who are more interested in management/operations, as becoming YEA President and joining YEA Exec are at the top. However, we do think those roles are the ones that show the most engagement with the group. This method is imperfect overall, and there was some subjectivity involved, which is not ideal. However, we think the rankings ended up being pretty accurate in depicting engagement, and more accurate than if we had tried to assign "points" to all of the above.
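As a concrete illustration of the tied-ranking step, SciPy's `rankdata` assigns tied fellows the average of the ranks they would span, which is the tie handling Spearman's rho expects. The engagement levels below are made up for illustration, not our actual data:

```python
from scipy.stats import rankdata

# Hypothetical engagement levels for one cohort (higher = more engaged);
# illustrative numbers only, not our real data.
engagement_levels = [5, 3, 3, 1, 4, 3]

# rankdata ranks in ascending order, so negate the levels to give the
# most engaged fellow rank 1. Tied fellows share the average rank.
ranks = rankdata([-level for level in engagement_levels])
print(ranks)  # [1. 4. 4. 6. 2. 4.]
```

The three fellows tied at level 3 would occupy ranks 3-5, so each receives the average rank of 4.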
Why we chose engagement
While we do not think engagement with EA is a perfect measure of the impact we try to achieve through the fellowship we think it does a decent job capturing that impact and is the easiest for us to use. For instance:
Many of YEA’s most engaged members have gone on to pursue high-impact careers and have cited their continued engagement with the group as a large influence on this.
We have been following CEA’s funnel model of movement building, for which engagement is a decent metric.
While we do post-fellowship surveys, we are unsure of whether the results hold over a longer period of time and answers such as “the fellowship had a significant impact on my career trajectory or donation plans” are ambiguous and difficult to quantify.
Since we have someone who has been running the group since the first revision of our fellowship in 2018, she was able to identify which members became the most engaged relatively easily.
Limitations to engagement as a metric
There is reason to believe that the CEA funnel model and measuring engagement more generally neglects the individual journey and non-linearity of many paths into EA
There is the possibility that the Fellowship had a significant impact on a participant and their future path but that fellow chose not to stay engaged with Yale’s group. Some reasons for this might be:
They spent a significant amount of time on other high-impact projects and didn’t have time to get involved with the group
They didn’t like the social atmosphere of the group or did not generally mesh well with the current community members
They didn’t find any of the other programs offered by our group particularly valuable or enticing and didn’t want to help organize
There was a latency period in the effect of the fellowship, and they only realized its impact after graduating and entering the workforce
Our Analysis
For each person we interviewed, we computed a composite interview score: the sum of their scores across the scoring categories. In practice, however, the raw scores were never particularly important. Rather, what mattered was the relative ranking of scores: the top ~15 people would all be admitted to the fellowship regardless of their raw scores. For people who ultimately went on to do the fellowship, we later gave an engagement score. Again, we felt most confident ranking people by engagement, rather than trying to precisely quantify that engagement.
We chose to use Spearman’s Rho to evaluate the rankings. Spearman’s Rho is a measure of correlation that only considers the relative ordering of scores, rather than their absolute values, which we felt was more appropriate here. We allowed ties in both rankings, so p-values below are not considered exact.
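For concreteness, here is a minimal sketch of the test using `scipy.stats.spearmanr` on made-up rankings (not our data). With no ties, it reduces to the classic rank-correlation formula:

```python
from scipy.stats import spearmanr

# Hypothetical rankings for five fellows (1 = best interview / most engaged);
# illustrative numbers only, not our actual data.
interview_ranks  = [1, 2, 3, 4, 5]
engagement_ranks = [2, 1, 4, 3, 5]

# spearmanr correlates the two rankings, ignoring absolute score values.
rho, p_value = spearmanr(interview_ranks, engagement_ranks)
print(round(rho, 2))  # 0.8 -- mostly concordant rankings
```

A rho near 1 would mean interview rank closely predicts engagement rank; our observed values were near 0.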
We completed the evaluation on data from Spring 2019, Fall 2019, Spring 2020, and Summer 2020 (Fall 2020 is too recent to properly score engagement). The data can be seen in Table 1.
Table 1: Spearman’s Rho Correlation of Interview Scores and Eventual Engagement
| Semester | Observed rho (the closer to 1, the better interview scores predict engagement) | p-value (H0: rho = 0) |
* Summer 2020 was an unusual fellowship, because we accepted students from outside Yale and got over 80 applicants. Regardless, our interview rankings were still not good predictors of engagement.
For all four fellowships, there was no significant correlation between fellows’ interview scores and engagement scores, and observed effects were quite small in any case. In other words, our scoring failed to predict what it was intended to predict.
Of course, we do not know what would have happened with applicants we did not admit. However, if we could not differentiate among the top k fellows whom we admitted, then there is no reason to think that applicant #k, who was admitted, was any better than applicant #k+1, who was not. As a result, we believe there is no evidence to suggest that the applicants we admitted were any better than those we did not (in terms of their likelihood to engage further).
Our plan this semester
We will use our same application and run interviews, but accept everyone over a baseline. Since we will have 6 facilitators, we can have up to 30 fellows in this semester’s cohorts. If more than 30 people are over the baseline, we will guarantee the excess applicants a spot in a future cohort as long as they apply again.
Rationale and Details
This semester we were lucky enough to get 6 previous fellows to volunteer to facilitate. In the past, we would only have a few facilitators who we knew were very familiar with EA and experienced with leading discussions. This semester, however, we have brought on some facilitators whose only experience with EA was the Fellowship last semester. Because we thought they were particularly good in discussions and would make good facilitators, we invited them to facilitate. We also had them go through the facilitator training for EA Virtual Programs.
Keeping our desired format for the fellowship, having 6 facilitators will allow us to have up to 30 fellows. We will still use our same application and run interviews, as we feel they help set high standards for the fellowship and weed out people who can’t commit. We plan on accepting everyone over a baseline: everyone who filled out the entire application, showed up to their interview, was willing to commit the necessary amount of time, and didn’t say things that directly conflict with EA (suggesting they would definitely not be receptive). If more than 30 people over the baseline apply, we will give the excess applicants* a guaranteed spot in a future cohort, given that they apply again.
We have guaranteed future spots in the past and have had several people take us up on the offer. This has generally been well-received and adds another filter for commitment. We make sure to send these applicants a personalized email explaining that we would really like them to do the fellowship but simply don’t have the capacity this semester. We will also give them the option to share particularly strong reasons for doing it this semester (such as it being their only semester with a light course load). Since we usually have at least 1-2 accepted applicants decide not to do the Fellowship, we can give people with strong reasons those spots.
EA Virtual Programs is currently in a trial stage, but if it goes well, they hope to have new batches of fellows every month. If this happens, encouraging applicants to apply there could be a great option.
*We will prioritize keeping second-semester seniors, final-year graduate students, and students who have a particularly strong reason to do the fellowship this semester (such as being on a leave of absence). We will then prioritize first-years, since they are still choosing their extracurriculars and majors. Sophomores and juniors will be the first to be cut if we have more than 30 good applicants. However, we plan to emphasize during interviews the amount of time required to successfully participate in the fellowship, so that those who will not have the time can self-select out of the running.
Surveying past fellows
It is possible that our rankings had higher correlation with other measures of engagement. As noted above, it is possible that some fellows remained highly engaged with EA ideas but for various reasons did not engage much with our group. We have not done extensive surveying of past fellows, so it is unclear how many people fit this description. In the near future, we plan to survey fellows who completed the fellowship over a year ago to ask about their longer-term engagement with EA as a whole.
Testing a new scoring system
Our old scoring system scored applicants along 6 axes, with interviewers giving each applicant a score of 1-5 on each axis. While we gave guidelines for scoring and calibrated our scores, this still involved some subjectivity. We will be testing a new way of scoring participants that uses checkboxes rather than subjective scores in different categories. We do not plan to use these scores to decide whom to admit this semester, but rather to analyze in the future whether they were more predictive of engagement than our previous method. If they are, we will likely switch back to being selective and will publish another post.