Thanks for putting this together! I’m surprised by the combination of the following two survey results:
Fellows’ estimate of how comfortable they would be pursuing a research project remains effectively constant. Many start out very comfortable with research. A few decline.
and
Networking, learning to do research, and becoming a stronger candidate for academic (but not industry) jobs top the list of what participants found most valuable about the programs. (emphasis mine)
That is: on average, fellows claim they learned to do better research, but became no more comfortable pursuing a research project.
Do you think this is mostly explained by most fellows already being pretty comfortable with research?
A scatter plot of comfort against improvement in research skill could help distinguish between different hypotheses (though this won’t be possible with the current data, given how the “greatest value adds” question was phrased).
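As a sketch of what that check could look like once per-fellow numbers exist (all values below are invented; the current survey did not collect them), a ceiling effect would show up as a strong negative correlation between comfort at the start and the change in comfort:

```python
# Sketch: test the "already comfortable" (ceiling) hypothesis by
# correlating comfort at program start with the change in comfort.
# All numbers are made up; a future survey would supply real
# per-fellow scores.

def pearson_r(xs, ys):
    """Pearson correlation coefficient, plain Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

comfort_before = [9, 8, 9, 5, 6, 7]   # 1-10 comfort at program start
comfort_change = [0, 0, -1, 2, 1, 0]  # comfort after minus before

r = pearson_r(comfort_before, comfort_change)
print(round(r, 2))  # -0.91 on this toy data; a strongly negative r in
                    # real data would support the ceiling explanation
```

The same per-fellow data would also support the scatter plot directly; the correlation is just the quickest single-number summary.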
Interestingly, the increase in perceived comfort with entrepreneurial projects is larger than the increase for research at every org. Perhaps the (mostly young) fellows just get slightly more comfortable with everything as they gain experience.
However, this is additional evidence that ERI programs are not increasing fellows’ self-perceived comfort with research any more than they increase fellows’ comfort with anything else. It would be interesting to see whether mentors think their fellows have improved overall; it may be that changes in self-perception and actual skill don’t correlate much.
I agree that this is confusing. Also note that fellows consistently ranked the programs as providing, on average, slightly higher research-skill gain than standard academic internships (an average of 5.7 on a 1-10 scale where 5 = the skill gain from a standard academic internship; see the “perceived skills and skill changes” section).
I can think of many possible theories, including:
fellows don’t become more comfortable with research despite gaining competence at it, because the competence doesn’t lead to feeling good at research (e.g. maybe they update towards research being hard, there is some Dunning-Kruger-type effect, or they already feel pretty comfortable, as you mention); in that case self-rated research comfort is a bad indicator, and we might instead try e.g. asking their mentors or using some other external metric
fellows don’t actually get better at research, but still rate it as a top source of value because they want to think they did; in that case their comfort with research staying flat is a more reliable indicator than their marking it as a top source of value (and they either have a low opinion of the skill gain from standard academic internships, or haven’t experienced one and are just pessimistically imagining what it would be like)
The main way to answer this seems to be getting a non-self-rated measure of research skill change.
Somewhat related comment: next time, I think it could be better to ask “What percentage of the value of the fellowship came from these different components?”* instead of “What do you think were the most valuable parts of the programme?”. This would give a bit more fine-grained data, which could be really important.
E.g. if it’s true that most of the value of ERIs comes from networking, this would suggest that people who want to scale ERIs should do pretty different things (e.g. lots of retreats optimised for networking).
*and give them several buckets to select from, e.g. <3%, 3-10%, 10-25%, etc.
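If buckets like those were used, the analysis could map each answer to its bucket midpoint and renormalise per respondent. A sketch; the buckets beyond the three listed above, and the component names, are my own invented examples:

```python
# Sketch: convert bucketed "% of value" answers into rough point
# estimates by taking each bucket's midpoint, then renormalising so a
# respondent's components sum to 100%. Buckets past "10-25%" and all
# component names are hypothetical.

BUCKET_MIDPOINTS = {
    "<3%": 1.5,
    "3-10%": 6.5,
    "10-25%": 17.5,
    "25-50%": 37.5,
    ">50%": 75.0,
}

def estimate_shares(answers):
    """Map bucket labels to midpoints and renormalise to sum to 100."""
    mids = {k: BUCKET_MIDPOINTS[v] for k, v in answers.items()}
    total = sum(mids.values())
    return {k: 100 * m / total for k, m in mids.items()}

answers = {"networking": ">50%", "research skills": "10-25%", "credential": "3-10%"}
shares = estimate_shares(answers)
print({k: round(v, 1) for k, v in shares.items()})
```

The renormalisation step also gives a built-in consistency check: if a respondent’s raw midpoints sum to something far from 100%, their answers were probably not a coherent distribution.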
Yes, letting them explicitly specify a distribution would have been better, especially as this was implicitly done anyway in the data analysis. We’d want to normalise it somehow: either by trusting and/or checking that each respondent gives a plausible distribution (i.e. one that sums to 1), or by just letting them rate things on a scale of 1-10 and then deriving an implied “distribution” from that.
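The 1-10-ratings option amounts to a one-line normalisation. A sketch, where the component names and ratings are hypothetical:

```python
# Sketch: derive an implied "share of value" distribution from 1-10
# component ratings by normalising them to sum to 1. Component names
# and ratings are hypothetical.

def implied_distribution(ratings):
    """Normalise a dict of positive ratings so the values sum to 1."""
    total = sum(ratings.values())
    if total <= 0:
        raise ValueError("ratings must contain at least one positive value")
    return {k: v / total for k, v in ratings.items()}

fellow = {"networking": 8, "research skills": 6, "career capital": 4, "mentorship": 2}
dist = implied_distribution(fellow)
print({k: round(v, 2) for k, v in dist.items()})
# {'networking': 0.4, 'research skills': 0.3, 'career capital': 0.2, 'mentorship': 0.1}
```

Unlike self-reported percentages, this always yields a valid distribution, at the cost of assuming the 1-10 ratings are on a roughly linear value scale.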
Cool, makes sense.

Agreed. For a non-self-rated measure of research skill change, asking mentors seems like the easiest thing to do, in the first instance.