Thanks for writing this, it's great to hear your thoughts on talent pipelines in AIS.
I agree with your model of AISC, MATS and your diagram of talent pipelines. I generally see MATS as a “next step” after AISC for many participants. Because of that, it's true that we can't cleanly compare the cost-per-researcher-produced between programs at different points in the pipeline, since they are complements rather than substitutes.
A funder would have to consider how to distribute funding between these options (i.e., conversion vs. acceleration), and that's something I'm hoping to model mathematically at some point.
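To make the funding-allocation question concrete, here's a minimal sketch of the kind of model I have in mind, assuming each program turns funding into researcher-value with diminishing returns. Every number and functional form here is hypothetical, purely for illustration:

```python
# Toy model: split a budget between a "conversion" program (AISC-style) and an
# "acceleration" program (MATS-style). Assume each has power-law diminishing
# returns, value = a * x**b, with made-up coefficients.

def total_value(x_conversion, budget=1.0,
                a_conv=2.0, b_conv=0.5,    # hypothetical conversion returns
                a_accel=3.0, b_accel=0.5): # hypothetical acceleration returns
    x_accel = budget - x_conversion
    return a_conv * x_conversion**b_conv + a_accel * x_accel**b_accel

# Grid-search the budget share sent to conversion that maximizes total value.
best_split = max((i / 100 for i in range(101)), key=total_value)
# → 0.31 (the analytic optimum for these toy coefficients is 1/3.25 ≈ 0.308)
```

The point of even a toy model like this is that the optimal split is interior: with diminishing returns on both sides, a funder should rarely put everything into one stage of the pipeline.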
I believe the “carrying capacity” of the AI safety research field is largely bottlenecked on good research leads (i.e., who can scope and lead useful AIS research projects), especially given how many competent software engineers are flooding into AIS. It seems a mistake not to account for this source of impact in this review.
Good idea, this could be a valuable follow-up analysis. To give this a proper treatment, we would need a model for how students and mentors interact to (say) produce more research, and an estimate of how much they complement each other.
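As a sketch of what such a student–mentor model might look like: one standard starting point is a Cobb-Douglas production function, where complementarity shows up as the marginal product of a mentor rising when students are abundant. All parameters here are invented for illustration, not estimated from AISC data:

```python
# Hypothetical Cobb-Douglas model of research output from mentors (research
# leads) and students. Coefficients are made up for illustration.

def research_output(mentors, students, A=1.0, alpha=0.5, beta=0.5):
    return A * mentors**alpha * students**beta

def marginal_product_of_mentor(mentors, students):
    # Output gained by adding one more mentor at the current student count.
    return research_output(mentors + 1, students) - research_output(mentors, students)

# When mentors are scarce relative to students, one extra mentor adds more
# output than when mentors are plentiful (diminishing returns in mentors):
scarce = marginal_product_of_mentor(5, 100)
plentiful = marginal_product_of_mentor(50, 100)
```

Under this functional form, `marginal_product_of_mentor(5, 100)` also exceeds `marginal_product_of_mentor(5, 50)`, which is the complementarity claim: more competent engineers entering the field raises the value of each additional research lead.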
In general, where we couldn't model or measure an impact well, we assumed it was negligible, in order to keep the estimate conservative. But hopefully we can build the capacity to consider these things!
Yes, we were particularly concerned that earlier camps were in-person and likely had a stronger selection bias toward people interested in AIS (since AI/AIS was more niche at the time), as well as a geographic selection bias. That's why I have more trust in the participant-tracking data for camps 4-6, which were more recent, virtual, and had a more consistent format.
Since AISC 8 is so big, it will be interesting to redo this analysis with a single cohort under the same format and degree of selection.