Regarding the special note on upskilling grants—are data available about the fraction of upskilling-grant recipients who are doing impactful work in the relevant subject area several years out?
We don’t have anything public, but we are conducting some retrospective evaluations that will look at upskilling grants in particular, and we expect to publish them eventually (though I don’t know the timeline for publication right now).
Internally we have taken some lightweight measures to assess the value of upskilling grants, and we think they are often pretty useful for accelerating recipients and getting them to do impactful work, whether in or outside the specific area of their upskilling project. We hope to have more concrete and shareable data in a few months’ time.
I don’t think we have much data on the effects of these grants several years out, as we have only been making them for 1-2 years, but I think that people often move into doing impactful work pretty quickly after their upskilling grant anyway.
If this data becomes available, I (and other grantmakers I know) would be extremely interested in seeing it. Knowing the success rate of upskilling grants will affect how likely I am to recommend the path of upskilling/independent research!
I don’t think we have much data on the effects of these [upskilling] grants several years out, as we have only been making them for 1-2 years, but I think that people often move into doing impactful work pretty quickly after their upskilling grant anyway.
I thought the first upskilling grant took place in early 2019?
Orpheus Lummis ($10,000): Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD
Arguably the Lauren Lee grant also qualifies.
https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-recommendations
Yep, my mistake. I probably should have said, “We have only been making a large quantity of these kinds of grants for a few years.” I’ll make a note to flag some of the early upskilling grants to the retroevals project team.
Thanks—I said “several years out” to reduce the risk of missing the effect of a grant in an analysis, rather than out of any reason to be concerned about retention. So having good results on a shorter timeframe would address what I was looking for.
To be clear, for most of the retrospective evaluation(s) we are aiming to work with external group(s)* to reduce personal/institutional bias and to improve the shareability of the results, although we’re still working out the details here.
*though likely still groups within the broader EA ecosystem, as they are more familiar with the results, metrics, and communication styles that matter to us.
Yes, of the AI-related grants, this seems like the kind whose success is easiest to assess over a 2-to-5-year period. It seems important to at least assess what we can!